Module-sys contains the core system functionality of MuditaOS
Table of contents
The whole MuditaOS system can, with great simplification, be seen as a connection between:
NOTE: The main difference between an application and a service is that an application has access to the UI while services don't, and that applications run only for the time they are required, while services stay alive and are merely dormant when inactive.
System manager is responsible for:
In order to create a new custom service, you have to inherit from the base Service class.
WARNING: Services are not started at the moment their constructor is called. This essentially means that sys::Bus and any functionality based on the bus are not available yet.
The first moment when you are able to call sys::Bus is in the InitHandler() of your application or service. This is the place where your service initialization code should happen.
Then you have to implement several virtual methods:
This handler is invoked upon creation of the service.
NOTE: It is very important to return the proper return code specified in: enum class ReturnCodes{...};
Invoked upon destruction of the service. Should free allocated resources.
As with InitHandler(), it is important to return the proper return code from: enum class ReturnCodes{...}; This handler will be called after ProcessCloseReasonHandler, in which we can act upon different close scenarios.
This handler is invoked when there is a request to switch the specified service's power mode. The available power modes which a service should support are specified below:
enum class ServicePowerMode{...};
Important: Currently there is no distinction between SuspendToRAM and SuspendToNVM. These two cases should be handled the same. Additionally, only SuspendToNVM can be received by a service.
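A hedged sketch of such a handler is shown below; the return type, the Active enumerator and the ServiceTest name are assumptions made for illustration and may differ slightly between MuditaOS revisions:
sys::ReturnCodes ServiceTest::SwitchPowerModeHandler(const sys::ServicePowerMode mode)
{
    switch (mode) {
    case sys::ServicePowerMode::Active:
        // leaving low power: re-enable peripherals, resume periodic work
        break;
    case sys::ServicePowerMode::SuspendToRAM:
    case sys::ServicePowerMode::SuspendToNVM:
        // currently handled identically: stop activity, release hardware resources
        break;
    }
    return sys::ReturnCodes::Success;
}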
NOTE: Please just define this function as empty if it is not required, for example:
sys::MessagePointer ServiceTest::DataReceivedHandler(sys::DataMessage *msgl, sys::ResponseMessage *resp)
{
    return std::make_shared<sys::ResponseMessage>(sys::ReturnCodes::Unresolved);
}
It's invoked for all messages without designated handlers.
- if DataMessage* msg is not nullptr, then it contains a valid message which was sent using the blocking Bus API
- if ResponseMessage* resp is not nullptr, then it contains a valid message which was sent using the non-blocking Bus API
NOTE: check the "Caveats and good practices" section.
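Putting the handlers together, a minimal custom service could look roughly like the sketch below. This is only a hedged outline: the include path, the base-class constructor arguments and the exact override signatures are assumptions for illustration and may need adjusting to your MuditaOS revision.
#include <Service/Service.hpp> // include path assumed
#include <memory>

class ServiceTest : public sys::Service
{
  public:
    ServiceTest() : sys::Service("ServiceTest") // constructor parameters may differ
    {}

    sys::ReturnCodes InitHandler() override
    {
        // first place where sys::Bus can be used safely: register handlers, start timers
        return sys::ReturnCodes::Success;
    }

    sys::ReturnCodes DeinitHandler() override
    {
        // free resources allocated in InitHandler()
        return sys::ReturnCodes::Success;
    }

    sys::ReturnCodes SwitchPowerModeHandler(const sys::ServicePowerMode mode) override
    {
        return sys::ReturnCodes::Success;
    }

    sys::MessagePointer DataReceivedHandler(sys::DataMessage *msgl, sys::ResponseMessage *resp) override
    {
        // fallback for messages without dedicated handlers
        return std::make_shared<sys::ResponseMessage>(sys::ReturnCodes::Unresolved);
    }
};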
Timers are a system feature and are Service/Application thread safe. Timer callbacks should be as short-lived as possible for the system to work smoothly (as with any other message call).
WARNING: Do not use FreeRTOS timers directly; without utmost care they can cause data races and obstruct sys::Timers and the system in general.
The system has a basic, coarse sys::Timer capability. There are two ways to handle actions on a timer:
void sys::timer::SystemTimer::connect(timer::TimerCallback &&newCallback)
Example:
th = sys::TimerFactory::createPeriodicTimer(
    this, "my periodic timer", std::chrono::milliseconds(1000), [](sys::Timer &t) {
        LOG_INFO("timers are awesome, periodic timer is active: %d", t.isActive());
    });
th.start();
NOTE: System timers are RAII; they are automatically destructed when their handles are removed!
NOTE: We do not have real-time system timers. It's possible to implement these, but there is no good mechanism to actually promote a thread to be the first to execute in the system.
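For completeness, a hedged sketch of a one-shot timer is shown below; createSingleShotTimer is assumed to exist alongside createPeriodicTimer and may be named differently in your MuditaOS revision:
// one-shot variant of the periodic example above (factory name is an assumption)
auto oneShot = sys::TimerFactory::createSingleShotTimer(
    this, "my one-shot timer", std::chrono::seconds(5), [](sys::Timer &t) {
        LOG_INFO("one-shot timer fired, it will not restart automatically");
    });
oneShot.start();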
SystemTimer and GuiTimer are meant as a connector between the System <=> GUI. A GuiTimer can be connected to a gui::Item related element with Application::connect(GuiTimer &&, gui::Item*) so that the timer's life cycle is the same as the referenced Item's.
WARNING: Please mind that safe usage of the bus is only available after InitHandler() was called.
The Bus subsystem was developed in order to allow cross-service communication. The preferred method to achieve service -> application communication is either:
The Bus enables us to:
The bus communication can be split into two different parts:
connect(...), DataReceivedHandler(), etc.
All that services and applications do is essentially act on, and optionally respond to, messages on the bus. There is literally no other way to properly perform any action in the system programmatically.
There are a few ways to handle messages on the bus:
- connect(...) and disconnect(...): meant to provide a signal -> slot interaction. These handlers can be attached anywhere in the Service/App (a hedged sketch follows below).
- async_call(...) -> sync(...): meant to provide minimal, one-time-request async capabilities. As we do not have std::promise, it is a poor man's implementation of such capabilities in the system.
- DataReceivedHandler(...): please do not use/extend DataReceivedHandler or promote the whole service implementation into this function.
M.P: This section is incomplete, mainly due to not having enough info about the implementation.
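Despite the caveat above, a minimal, hedged sketch of the signal -> slot style can still be given. MyCustomMessage is a hypothetical message type, and the connect() overload taking a std::type_info is an assumption about the Service API; treat this as a sketch rather than the definitive interface. Registration would typically happen in InitHandler().
// hedged sketch: attach a handler for one hypothetical message type
connect(typeid(MyCustomMessage), [this](sys::Message *request) -> sys::MessagePointer {
    auto *msg = static_cast<MyCustomMessage *>(request);
    // ... act on the message here ...
    return std::make_shared<sys::ResponseMessage>();
});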
As far as I know, workers were created as an abstraction over FreeRTOS threads and are designed to cooperate tightly with services. By design their lifecycle is controlled by services, so this relationship is a little similar to process(service)-thread(worker), except that workers don't share memory/resources with the parent service. They are also separate units of processing, as they don't know anything about each other.
From my understanding they are mostly used as a means of offloading CPU-intensive work from services, work which could otherwise block a service's DataReceivedHandler for too long.
For examples of their usage, please check the application code, as that seems to be where they are used the most.
MuditaOS power management was designed and developed based on Linux Power Management subsystem. There are three main assumptions:
Most information about design and implementation can be found in AND_0011_PowerManager.
Additionally, the current implementation of PowerManager (it should be considered the first iteration of development and absolutely cannot be treated as a final solution) is very simple, but it proved to work and fulfilled the current requirements. For the time being the PowerManager class exposes two methods which are internally used by SystemManager:
Done via: int32_t Switch(const Mode mode)
This method allows switching the CPU to the different modes listed below:
enum class Mode{
    FullSpeed,
    LowPowerRun,
    LowPowerIdle,
    Suspend
};
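For illustration only, a hedged sketch of a call to this method is shown below; the powerManager instance, the enum scoping and the assumption that 0 means success are not confirmed API details:
// hedged sketch: request a CPU power mode change (0 assumed to mean success)
const auto result = powerManager->Switch(sys::PowerManager::Mode::LowPowerIdle);
if (result != 0) {
    LOG_ERROR("CPU power mode switch failed with code %d", static_cast<int>(result));
}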
In the current implementation only the FullSpeed and LowPowerIdle modes are used. It is worth noting that LowPowerIdle is heavily customized and absolutely does not correspond to the low power idle mode from the RT1051's data sheet. The main differences are:
Actual code is implemented in module-bsp/board/rt1051/bsp/lpm/RT1051LPM.cpp and module-bsp/board/rt1051/common/clock_config.cpp.
The research was done regarding usage of the Suspend state, and it resulted in several conclusions:
- it is not necessary to use the Suspend state in order to fulfill the business requirements (5 mA in aeroplane mode)
- based on the above, use of the Suspend mode is not necessary and it was dropped
LowPowerRun mode is only listed for convenience as it is not even implemented.
Done via: int32_t PowerOff()
This method is used to turn off the power supply to the system. It is invoked by SystemManager after a successful system close.
This is an optional feature, but by implementing it we will be able to limit current consumption during normal operation.
Currently the whole mechanism of switching services into low power mode is very simple. There are no additional checks being made and there is no possibility to disable the auto-lock feature and so on. Another issue is that most of the low-power logic is placed in ApplicationManager, which unfortunately is a bad design choice. It remains to be seen whether the current solution will be sufficient. It is very possible that a more advanced mechanism of communication between PowerManager and the system logic will have to be designed and developed.
Currently the core voltage during operation in low power mode is set to 0.9V. This results in 2.08mA of current consumption. It can be lowered even further by dropping the core voltage to 0.8V (resulting in current consumption of around 1.86mA). As far as I know, setting the core voltage to 0.8V is considered unstable, but it is worth trying/testing.
Core voltage is set in LPM_EnterLowPowerIdle function which can be found in module-bsp/board/rt1051/common/clock_config.cpp. For more info please check RT1051 Reference Manual, Chapter 18 "DCDC Converter" and DCDC Register 3.
If you ever set an empty shared pointer as the response to a message, the handler won't respond to the caller. This has very serious consequences:
- callers cannot successfully use async_call on it, as there will be no response
NOTE: This might render your application/service useless in some scenarios.
NOTE: You can see the system messages handled by services by unlocking the flag in debug.hpp.
Blocking requests have very serious consequences.
Use connect(...) instead. It is okay to do some minor blocking tasks, but blocking for longer periods will make the whole system unpredictable or behave in a very uncontrolled manner. It is perfectly fine to use the blocking Bus::SendUnicast API in the handler.
Some time ago a second parameter was added to DataReceivedHandler:
DataReceivedHandler(DataMessage* msg, ResponseMessage* resp)
This addition was dictated by the need to use the non-blocking variant of Bus::SendUnicast, which can then trigger receiving an async response. In this case the second parameter is not nullptr and contains a valid ResponseMessage; otherwise ResponseMessage* resp will be set to nullptr. Additionally, by design there won't be a situation where both DataMessage* msg and ResponseMessage* resp are not nullptr. Users of services should be aware of that and always check whether the parameters are valid before using them.
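A hedged sketch of the defensive pattern this implies is shown below; ServiceTest is illustrative, and the nullptr checks follow directly from the contract described above:
sys::MessagePointer ServiceTest::DataReceivedHandler(sys::DataMessage *msgl, sys::ResponseMessage *resp)
{
    if (msgl != nullptr) {
        // request delivered via the blocking unicast API: handle it and respond
    }
    else if (resp != nullptr) {
        // asynchronous response to a non-blocking unicast sent earlier
    }
    return std::make_shared<sys::ResponseMessage>();
}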
This creates a very specific situation: we can't depend on the existence of the appmgr service, yet it must still be possible to launch any application or service.