This repository contains the audio subsystem of the PurePhone system.
PureOS's audio subsystem was heavily inspired by two fine pieces of software:
- Linux Sound Subsystem: the implementation of module-audio was developed after going through the Linux Sound Subsystem documentation and various other documents describing Linux audio. These documents were mainly used as inspiration, as the requested PureOS audio functionality is much less sophisticated. Also, due to hardware limitations (mainly processor speed and hardware audio codec features), some additional assumptions were made.
- PortAudio: the audio device layer (which can be found in audio paths), the concept of audio devices and the audio module API were based on the PortAudio library.
Due to limited processor speed it was not possible to implement dynamic re-sampling on the fly, hence the hardware audio codec and BT configuration (TODO in the case of using Bluetooth) need to be switched according to the format of the audio file that is currently being played.
Due to the lack of re-sampling ability, channel abstraction does not exist, thus it is not possible to play a notification during a voice call or media playback.
This format was picked as it is commonly used. Additionally, moving raw PCM samples from one point to another is the easiest and the most error-resilient method.
Currently, due to hardware limitations of the audio codec, only 16-bit PCM samples are supported. It is possible to add support for 8-, 24- or even 32-bit samples. This should be done in the audio device layer inside module-bsp/bsp/audio.
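For illustration only, a conversion step for 8-bit samples could look like the sketch below. It assumes the common WAV convention that 8-bit PCM is unsigned with a bias of 128; the helper name and container type are inventions for this example, not existing module-audio code:

```cpp
#include <cstdint>
#include <vector>

// Widen 8-bit unsigned PCM to the 16-bit signed format the codec consumes.
// 8-bit WAV PCM is stored unsigned around a bias of 128, so the bias is
// removed first and the result is shifted into the 16-bit range.
std::vector<int16_t> convertU8toS16(const std::vector<uint8_t> &in)
{
    std::vector<int16_t> out;
    out.reserve(in.size());
    for (const auto sample : in) {
        out.push_back(static_cast<int16_t>((static_cast<int>(sample) - 128) << 8));
    }
    return out;
}
```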
This is self-explanatory. The whole switching logic is implemented inside module-audio.
iPhone headphones use a very specific format for handling the volume up/down buttons which, unfortunately, is not available. By design, only the Enter button is supported.
Audio devices, which can be found in audio paths, are a generic abstraction over different kinds of output/input audio devices, for instance audio codecs, GSM audio and so on. The main purpose of using this abstraction is to have one unified interface (API) for controlling and managing audio devices. This way we can also easily support different compile targets (currently Linux & RT1051); it is only a matter of implementing a new audio device.
Currently we have four implementations of audio devices:
Each audio device must conform to the API specified in the bsp::AudioDevice class in bsp_audio.
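To illustrate the idea, here is a heavily reduced, hypothetical view of such an interface and a new device conforming to it. The method names and signatures below are assumptions made for this sketch; consult the real bsp::AudioDevice class for the actual contract:

```cpp
// Hypothetical, trimmed-down stand-in for the real bsp::AudioDevice contract.
namespace bsp
{
    class AudioDevice
    {
      public:
        enum class RetCode
        {
            Success,
            Failure
        };

        virtual ~AudioDevice() = default;
        virtual RetCode Start()                     = 0;
        virtual RetCode Stop()                      = 0;
        virtual RetCode OutputVolumeCtrl(float vol) = 0;
    };
} // namespace bsp

// Supporting a new compile target or piece of hardware is then only a matter
// of providing another implementation of the interface:
class LinuxAudioDevice : public bsp::AudioDevice
{
  public:
    RetCode Start() override
    {
        // open the host audio stream here (e.g. via PortAudio on Linux)
        return RetCode::Success;
    }
    RetCode Stop() override
    {
        // close the host audio stream
        return RetCode::Success;
    }
    RetCode OutputVolumeCtrl(float vol) override
    {
        volume = vol; // scale outgoing samples by this factor
        return RetCode::Success;
    }

  private:
    float volume = 1.0f;
};
```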
The decoders layer was developed in order to have a unified API over a vast range of audio decoders. Another very important part of the decoders is the ability to fetch audio metadata. Different kinds of decoders support different metadata; for instance, the MP3 decoder supports ID3 tags, and so on. All of these implementation details are hidden from the higher layers. The user receives metadata via a unified structure describing the metadata, see struct Tags.
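The sketch below shows the general shape of that flow. The field names in this Tags stand-in and the fetchTags method are illustrative assumptions; the real struct Tags in module-audio may define a different set of fields:

```cpp
#include <cstdint>
#include <optional>
#include <string>

// Unified metadata structure: every decoder fills the same fields, no matter
// whether the data originates from ID3 tags (MP3), WAV INFO chunks, etc.
struct Tags
{
    std::string title;
    std::string artist;
    std::string album;
    std::uint32_t totalSecs = 0; // track length in seconds
};

struct Decoder
{
    virtual ~Decoder() = default;

    // Format-specific parsing happens inside each decoder; higher layers
    // only ever see the unified Tags structure (or nothing on failure).
    virtual std::optional<Tags> fetchTags() = 0;
};
```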
Currently there are three implementations:
The reasons for developing the encoders layer were the same as for the decoders. Currently the only supported encoder is WAV.
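As a reference point for what that encoder has to produce, below is a minimal canonical RIFF/WAVE header for 16-bit PCM. The layout follows the public WAV specification; the struct itself is just an illustration, not the encoder's actual code:

```cpp
#include <cstdint>

#pragma pack(push, 1) // the header must be laid out byte-exact on disk
struct WavHeader
{
    char          riff[4]       = {'R', 'I', 'F', 'F'};
    std::uint32_t fileSize      = 0;             // 36 + dataSize, patched when closing the file
    char          wave[4]       = {'W', 'A', 'V', 'E'};
    char          fmt[4]        = {'f', 'm', 't', ' '};
    std::uint32_t fmtSize       = 16;            // size of the PCM "fmt " chunk
    std::uint16_t audioFormat   = 1;             // 1 = uncompressed PCM
    std::uint16_t numChannels   = 2;
    std::uint32_t sampleRate    = 44100;
    std::uint32_t byteRate      = 44100 * 2 * 2; // sampleRate * channels * bytes per sample
    std::uint16_t blockAlign    = 4;             // channels * bytes per sample
    std::uint16_t bitsPerSample = 16;
    char          data[4]       = {'d', 'a', 't', 'a'};
    std::uint32_t dataSize      = 0;             // raw PCM byte count, patched when closing
};
#pragma pack(pop)
```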
Audio module functionality is made up of 4 base operations/states:
Possible states are enumerated in audio::Audio::State. Each state supports various methods (defined in the interface class audio::Operation) which can be invoked at any time. It is possible to send external events to the current Operation; this is realised through the SendEvent method. Available events that can be passed to this method are enumerated in Operation::Event.
Not all events are supported in every state. For instance, StartCallRecording and StopCallRecording are not supported in the Playback operation and will be ignored upon receiving.
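A minimal, self-contained illustration of this behaviour (the enum values mirror this README, but the dispatch logic is invented for the sketch):

```cpp
#include <iostream>

enum class Event
{
    HeadphonesPlugin,
    StartCallRecording,
    StopCallRecording
};

struct PlaybackOperation
{
    void SendEvent(Event evt)
    {
        switch (evt) {
        case Event::HeadphonesPlugin:
            std::cout << "switching output to headphones\n";
            break;
        default:
            // Call-recording events are meaningless during playback,
            // so they are ignored rather than reported as errors.
            break;
        }
    }
};

int main()
{
    PlaybackOperation playback;
    playback.SendEvent(Event::HeadphonesPlugin);   // acted upon
    playback.SendEvent(Event::StartCallRecording); // silently ignored
}
```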
This is the default state of the audio subsystem. The audio subsystem always operates in this state when none of the below is active:
The audio subsystem will also transition to this state upon the following events:
- Stop request, regardless of the current state

In this state the audio subsystem tries to optimize power consumption by closing/removing all audio devices and limiting internal operations to a minimum.
The Playback operation is used when the user wants to play an audio file. The file's extension is used to deduce which decoder to use.
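A plausible shape of that deduction is sketched below. The factory helper and concrete decoder names are assumptions made for the example (only the MP3 decoder is explicitly named earlier in this README):

```cpp
#include <algorithm>
#include <cctype>
#include <memory>
#include <string>

struct Decoder { virtual ~Decoder() = default; };
struct DecoderMP3 final : Decoder {};
struct DecoderWAV final : Decoder {};

// Pick a decoder purely from the file extension, case-insensitively.
std::unique_ptr<Decoder> createDecoder(const std::string &filePath)
{
    const auto dot  = filePath.find_last_of('.');
    std::string ext = (dot == std::string::npos) ? "" : filePath.substr(dot + 1);
    std::transform(ext.begin(), ext.end(), ext.begin(),
                   [](unsigned char c) { return static_cast<char>(std::tolower(c)); });

    if (ext == "mp3") {
        return std::make_unique<DecoderMP3>();
    }
    if (ext == "wav") {
        return std::make_unique<DecoderWAV>();
    }
    return nullptr; // unsupported format
}
```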
The Recorder operation is used when the user wants to record external sound, for example via the internal microphone. As mentioned earlier, for the time being only the WAV encoder is supported.
The Router operation is used in connection with the GSM modem and provides the means for establishing an audio voice call. Under the hood, the router operation uses two audio devices simultaneously, both configured as full-duplex (both Rx and Tx channels), and routes audio samples between them. Additionally, when routing it is possible to sniff or store audio samples to external buffers/the file system. This feature is currently mainly used to record voice calls to a file.
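Conceptually the routing loop looks like the sketch below; all names are invented for illustration, and the real implementation works on the devices' stream machinery rather than plain function objects:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <functional>

// A full-duplex endpoint as seen by the router: samples can be pulled from
// its Rx channel and pushed into its Tx channel.
struct FullDuplexDevice
{
    std::function<std::size_t(std::int16_t *, std::size_t)> read; // Rx
    std::function<void(const std::int16_t *, std::size_t)> write; // Tx
};

using Sniffer = std::function<void(const std::int16_t *, std::size_t)>;

// One routing pass: codec mic -> GSM uplink, GSM downlink -> codec speaker.
// The optional sniffer sees every frame, e.g. to record the call to a file.
void routeOnce(FullDuplexDevice &codec, FullDuplexDevice &gsm, const Sniffer &sniffer)
{
    std::array<std::int16_t, 256> frame{};

    std::size_t n = codec.read(frame.data(), frame.size()); // microphone path
    gsm.write(frame.data(), n);
    if (sniffer) {
        sniffer(frame.data(), n);
    }

    n = gsm.read(frame.data(), frame.size());               // speaker path
    codec.write(frame.data(), n);
    if (sniffer) {
        sniffer(frame.data(), n);
    }
}
```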
In order to store Operation configuration, the concept of an Audio Profile has been introduced. The list of supported audio profiles is enumerated in Profile::Type.
Audio Profiles exist to complement Operations. They store the audio configuration which the Operations then use to configure audio devices. Each Operation supports its own specific set of Audio Profiles which can be switched back and forth. To do so, the mechanism of sending external events described earlier is used. For instance, sending a HeadphonesPlugin event when in Playback will result in switching from the current profile, i.e. PlaybackLoudspeaker, to the PlaybackHeadphones profile. Switching the profile triggers some additional internal actions:
- switch to the Idle operation and report an error

Profile configurations can be found in the directory: Profiles
Operations may not implement support for all possible Profile parameters, e.g. inputGain and inputPath will be ignored by the Playback operation.
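For orientation, a profile can be pictured roughly as the struct below. The field set is an assumption for the sketch; the real Profile class and the configurations in the Profiles directory are authoritative:

```cpp
#include <cstdint>
#include <string>

// Hypothetical, reduced picture of what an Audio Profile carries.
struct ProfileSketch
{
    std::string   name;         // e.g. "PlaybackHeadphones"
    float         outputVolume; // applied to the audio device when switching
    float         inputGain;    // ignored by the Playback operation
    std::uint32_t inputPath;    // ignored by the Playback operation
    std::uint32_t outputPath;   // codec routing, e.g. loudspeaker vs. headphones
};
```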
IMPORTANT: For the time being profiles are not loaded from and stored into the database. This should be fixed.
NOTE: Currently some of the profiles are configurable via json files located here. The json format explanation can be found here.
IMPORTANT: Callbacks mechanism is only experimental and should be considered as incomplete.
The Audio class is the main interface to the audio subsystem. It is used exclusively by audio-service. The Audio class internally stores the current Operation, Profile, State, async callback, db callback etc.
AsyncCallback holds a wrapper for a function that is invoked asynchronously when an audio subsystem event occurs. It can be used for signalling the user of the audio subsystem about audio events. Possible events are listed in audio::PlaybackEventType.
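A minimal model of that mechanism is shown below. The event value is a placeholder chosen for the example; see audio::PlaybackEventType for the real list:

```cpp
#include <functional>
#include <iostream>

enum class PlaybackEventType
{
    EndOfFile, // placeholder value for the sketch
};

using AsyncCallback = std::function<void(PlaybackEventType)>;

int main()
{
    // The user of the audio subsystem (audio-service) installs a callback...
    AsyncCallback onAudioEvent = [](PlaybackEventType evt) {
        if (evt == PlaybackEventType::EndOfFile) {
            std::cout << "playback finished, request the next track\n";
        }
    };

    // ...which the subsystem later invokes when the corresponding event occurs.
    onAudioEvent(PlaybackEventType::EndOfFile);
}
```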
The Audio class APIs may change the State of the audio subsystem. The user should be aware of the following constraints:

- Start: can be invoked from any state
- Stop: can be invoked from any state
- Pause: can be invoked only when not in the Idle state
- Resume: can be invoked only after a successful Pause request

Failing to adhere to these constraints will result in an appropriate error being returned.
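The toy state machine below models exactly these rules (it is not the real audio::Audio implementation; where the real API returns an appropriate error, the stub simply returns false):

```cpp
#include <iostream>

enum class State
{
    Idle,
    Active,
    Paused
};

struct AudioStub
{
    State state = State::Idle;

    bool Start() { state = State::Active; return true; } // allowed from any state
    bool Stop()  { state = State::Idle;   return true; } // allowed from any state
    bool Pause()
    {
        if (state == State::Idle) {
            return false; // Pause is not allowed in Idle
        }
        state = State::Paused;
        return true;
    }
    bool Resume()
    {
        if (state != State::Paused) {
            return false; // Resume requires a preceding successful Pause
        }
        state = State::Active;
        return true;
    }
};

int main()
{
    AudioStub audio;
    std::cout << audio.Pause()  << '\n'; // 0 - rejected, still in Idle
    std::cout << audio.Start()  << '\n'; // 1
    std::cout << audio.Pause()  << '\n'; // 1
    std::cout << audio.Resume() << '\n'; // 1
    std::cout << audio.Stop()   << '\n'; // 1 - back to Idle
}
```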
Currently, support for BT audio devices is marginal. There are some code parts related to BT but they absolutely cannot be treated as a target implementation. In my opinion, adding BT support should be split into several steps:
- implement a BT audio device in module-bsp/bsp/audio
- add the necessary BT profiles (PlaybackBTA2DP and SystemSoundBTA2DP)

Templates for all necessary profiles are already implemented. They should be verified and double-checked, though. Jack insert/remove events are also correctly propagated into the audio subsystem. What is missing is the actual handling of the jack detection circuit at the low level. There is also no communication between the audio subsystem and the event system.
As described earlier, dedicated async callbacks were added to each profile. It is to be considered whether this mechanism is sufficient or should be reimplemented. Currently, the profiles' configuration is not persisted and loaded upon start; it should be stored in the DB.
Some missing features are also described in TODO.