Android Audio Volume Setting: Flow and Principle Analysis

Introduction

This article walks through the Android volume-setting flow from the App layer down into the framework, together with the related details and principles. Before reading it, I recommend going through the sections on sound volume attributes and the derivation of sound-related formulas in my mixing-theory article ("混音理论篇"); that background helps a great deal when following the volume-index and volume-dB calculations here. If you spot mistakes while reading, please point them out!
The overall volume-setting flow is as follows:
When an App sets the volume, it normally goes through two AudioManager interfaces: setStreamVolume and adjustStreamVolume (a usage sketch follows the constant list below). The streamType argument is one of the stream-type constants:
public static final int STREAM_DEFAULT = -1;
public static final int STREAM_VOICE_CALL = 0;
public static final int STREAM_SYSTEM = 1;
public static final int STREAM_RING = 2;
public static final int STREAM_MUSIC = 3;
public static final int STREAM_ALARM = 4;
public static final int STREAM_NOTIFICATION = 5;
public static final int STREAM_BLUETOOTH_SCO = 6;
@UnsupportedAppUsage
public static final int STREAM_SYSTEM_ENFORCED = 7;
public static final int STREAM_DTMF = 8;
public static final int STREAM_TTS = 9;
public static final int STREAM_ACCESSIBILITY = 10;
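As a quick illustration of those two entry points (a minimal app-side sketch; the stream, index and flag values are arbitrary examples, and context is assumed to be a valid Context):

// Minimal sketch: the two AudioManager calls analyzed in the rest of this article.
AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);

// Set the media stream to an absolute volume index and show the system volume UI.
am.setStreamVolume(AudioManager.STREAM_MUSIC, 5, AudioManager.FLAG_SHOW_UI);

// Nudge the media stream one step up; AudioService turns the direction into an index change.
am.adjustStreamVolume(AudioManager.STREAM_MUSIC, AudioManager.ADJUST_RAISE,
        AudioManager.FLAG_SHOW_UI);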
Before analyzing AudioService's setStreamVolume and adjustStreamVolume functions, we first need to understand a few pieces of internal AudioService state; they are essential for following the volume-setting analysis.
The mUseFixedVolume flag comes from the framework config.xml entry com.android.internal.R.bool.config_useFixedVolume. If it is true, no device allows volume adjustment and the volume is pinned to its max value.
mFixedVolumeDevices is a collection of output devices (out devices); devices in this set cannot have their volume adjusted and always use the max value.
The so-called safe-volume devices are also a collection of output devices, containing the headset-related outputs. When the volume is changed or a headset is plugged in, they prevent a sudden volume jump that could damage hearing. For example, if the phone's music volume is at maximum and we then plug in and put on a headset, keeping the output at maximum could hurt our ears; this is where the safe-volume devices come in: when the headset is plugged in, the volume drops to the safe volume value to protect our hearing.
The stream volume alias is an array defined in AudioService whose elements are themselves streamTypes, as follows:
private final int[] STREAM_VOLUME_ALIAS_DEFAULT = new int[] {
    AudioSystem.STREAM_VOICE_CALL,      // STREAM_VOICE_CALL
    AudioSystem.STREAM_SYSTEM,          // STREAM_SYSTEM
    AudioSystem.STREAM_RING,            // STREAM_RING
    AudioSystem.STREAM_MUSIC,           // STREAM_MUSIC
    AudioSystem.STREAM_ALARM,           // STREAM_ALARM
    AudioSystem.STREAM_NOTIFICATION,    // STREAM_NOTIFICATION
    AudioSystem.STREAM_BLUETOOTH_SCO,   // STREAM_BLUETOOTH_SCO
    AudioSystem.STREAM_RING,            // STREAM_SYSTEM_ENFORCED
    AudioSystem.STREAM_RING,            // STREAM_DTMF
    AudioSystem.STREAM_MUSIC,           // STREAM_TTS
    AudioSystem.STREAM_MUSIC            // STREAM_ACCESSIBILITY
};
How do we get the volume alias of a given streamType?
By indexing the array: for example, STREAM_VOLUME_ALIAS_DEFAULT[STREAM_TTS] yields AudioSystem.STREAM_MUSIC.
And what is the alias for?
When setting volume, it groups different audio streams so that their volumes are managed together, because on different products different stream types may be managed identically; on an Android TV device, for example, STREAM_MUSIC and STREAM_RING are managed the same way.
VolumeStreamState: concept
It is the volume state management class inside AudioService, one instance per streamType: for every audio stream type a VolumeStreamState object is created, and when the App layer sets the volume of a streamType, that stream's VolumeStreamState object handles it. Let's look at how they are created:
private void createStreamStates() {
    // 11 stream types in total, defined in AudioSystem
    int numStreamTypes = AudioSystem.getNumStreamTypes();
    VolumeStreamState[] streams = mStreamStates = new VolumeStreamState[numStreamTypes];
    // mStreamStates really holds numStreamTypes entries, but some of them may share the same name
    for (int i = 0; i < numStreamTypes; i++) {
        // mStreamVolumeAlias holds the alias int of each streamType;
        // VOLUME_SETTINGS_INT holds the settings-name string of each stream
        streams[i] =
                new VolumeStreamState(System.VOLUME_SETTINGS_INT[mStreamVolumeAlias[i]], i);
    }
    ......
}
The VolumeStreamState constructor:
private class VolumeStreamState {
    private final int mStreamType;
    private int mIndexMin;    // minimum volume index
    private int mIndexMax;    // maximum volume index
    private boolean mIsMuted; // muted or not
    private String mVolumeIndexSettingName;
    private int mObservedDevices;
    // holds the volume index of every device
    private final SparseIntArray mIndexMap = new SparseIntArray(8);
    private final Intent mVolumeChanged;
    private final Intent mStreamDevicesChanged;

    // settingName comes from the VOLUME_SETTINGS_INT array in Settings.java,
    // e.g. the strings volume_music, volume_voice, ...
    private VolumeStreamState(String settingName, int streamType) {
        mVolumeIndexSettingName = settingName;
        mStreamType = streamType;
        mIndexMin = MIN_STREAM_VOLUME[streamType] * 10;
        mIndexMax = MAX_STREAM_VOLUME[streamType] * 10;
        // effectively sets the mIndexMin / mIndexMax members of the VolumeCurves object
        AudioSystem.initStreamVolume(streamType, mIndexMin / 10, mIndexMax / 10);
        readSettings();
        .......
    }
}
<volumeGroup>
    <name>oem_traffic_anouncement</name>
    <indexMin>0</indexMin>
    <indexMax>40</indexMax>
    <volume deviceCategory="DEVICE_CATEGORY_SPEAKER">
        <point>0,-4200</point>
        <point>33,-2800</point>
        <point>66,-1400</point>
        <point>100,0</point>
    </volume>
</volumeGroup>
So how are these indexMin/indexMax values obtained? I suggest first reading my article on parsing the strategy configuration files ("Audio解析strategy配置文件"): parsing the strategy configuration yields the link between strategies and volumeGroup data; from the streamType we get the attributes, from the attributes the volumeGroup, and from the volumeGroup its indexMin and indexMax.
3. mIndexMap is a map whose keys are devices and whose values are volume indices. The devices cover AudioSystem.DEVICE_OUT_ALL, e.g. DEVICE_OUT_EARPIECE, DEVICE_OUT_SPEAKER and so on; in other words, every VolumeStreamState keeps the volume index of every output device (a small sketch follows).
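Conceptually (a standalone sketch, not framework code; the index values are made up and the AudioSystem device constants are hidden framework API), the per-stream map looks like this:

// Per-stream map from output-device bit to volume index (stored x10), as VolumeStreamState keeps it.
SparseIntArray indexMap = new SparseIntArray(8);
indexMap.put(AudioSystem.DEVICE_OUT_SPEAKER, 50);        // speaker at UI index 5
indexMap.put(AudioSystem.DEVICE_OUT_WIRED_HEADSET, 30);  // wired headset at UI index 3
int speakerIndex = indexMap.get(AudioSystem.DEVICE_OUT_SPEAKER, -1);  // -> 50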
VolumeStreamState: readSettings
readSettings reads the persisted configuration, i.e. it fills in part of the VolumeStreamState members. The code is as follows:
public void readSettings() {
    synchronized (mSettingsLock) {
        synchronized (VolumeStreamState.class) {
            // force maximum volume on all streams if fixed volume property is set
            if (mUseFixedVolume) {
                // DEVICE_OUT_DEFAULT acts as the default-device entry: max volume for everything
                mIndexMap.put(AudioSystem.DEVICE_OUT_DEFAULT, mIndexMax);
                return;
            }
            // do not read system stream volume from settings: this stream is always aliased
            // to another stream type and its volume is never persisted. Values in settings can
            // only be stale values
            if ((mStreamType == AudioSystem.STREAM_SYSTEM) ||
                    (mStreamType == AudioSystem.STREAM_SYSTEM_ENFORCED)) {
                // for system streams, use this type's default volume from AudioSystem
                int index = 10 * AudioSystem.DEFAULT_STREAM_VOLUME[mStreamType];
                if (mCameraSoundForced) {
                    index = mIndexMax;
                }
                // for this streamType the volume is a fixed value regardless of output device
                mIndexMap.put(AudioSystem.DEVICE_OUT_DEFAULT, index);
                return;
            }
        }
    }
    synchronized (VolumeStreamState.class) {
        // DEVICE_OUT_ALL is the OR of all output devices; each device occupies one bit
        // (e.g. 0x01, 0x02, 0x04, ...), so the OR'ed bits never collide
        int remainingDevices = AudioSystem.DEVICE_OUT_ALL;
        for (int i = 0; remainingDevices != 0; i++) {
            // take each device bit in turn
            int device = (1 << i);
            // if this bit is not in the full device set, there is nothing to do for it
            if ((device & remainingDevices) == 0) {
                continue;
            }
            // remove the current device from the remaining set
            remainingDevices &= ~device;

            // retrieve current volume for device
            // if no volume stored for current stream and device, use default volume if default
            // device, continue otherwise
            int defaultIndex = (device == AudioSystem.DEVICE_OUT_DEFAULT) ?
                    AudioSystem.DEFAULT_STREAM_VOLUME[mStreamType] : -1;
            int index;
            // not a valid settings name (null or empty string)
            if (!hasValidSettingsName()) {
                index = defaultIndex;
            } else {
                // build the per-device settings key, a combination of stream type and output
                // device, e.g. volume_music_speaker, volume_music_earpiece, ...
                String name = getSettingNameForDevice(device);
                // read the previously persisted index through the ContentResolver
                index = Settings.System.getIntForUser(
                        mContentResolver, name, defaultIndex, UserHandle.USER_CURRENT);
            }
            if (index == -1) {
                continue;
            }
            // getValidIndex clamps the index into the [mIndexMin, mIndexMax] range
            mIndexMap.put(device, getValidIndex(10 * index));
        }
    }
}
Finally, the relationships between these object instances inside AudioService are shown in the diagram below:
protected void adjustStreamVolume(int streamType, int direction, int flags,
        String callingPackage, String caller, int uid) {
    if (mUseFixedVolume) {
        return;
    }
    .......
    boolean isMuteAdjust = isMuteAdjust(direction);
    if (isMuteAdjust && !isStreamAffectedByMute(streamType)) {
        return;
    }
    .........
    // resolve the alias
    int streamTypeAlias = mStreamVolumeAlias[streamType];
    // get the VolumeStreamState of the alias
    VolumeStreamState streamState = mStreamStates[streamTypeAlias];
    // get the output device of this stream
    final int device = getDeviceForStream(streamTypeAlias);
    // get the current volume index of this stream on this output device
    int aliasIndex = streamState.getIndex(device);
    boolean adjustVolume = true;
    int step;
    .......
    // reset any pending volume command
    synchronized (mSafeMediaVolumeStateLock) {
        mPendingVolumeCommand = null;
    }
    flags &= ~AudioManager.FLAG_FIXED_VOLUME;
    if ((streamTypeAlias == AudioSystem.STREAM_MUSIC) && ((device & mFixedVolumeDevices) != 0)) {
        flags |= AudioManager.FLAG_FIXED_VOLUME;
        // safe media volume is active and the device belongs to the safe device set
        // (headset / headphone)
        if (mSafeMediaVolumeState == SAFE_MEDIA_VOLUME_ACTIVE
                && (device & mSafeMediaVolumeDevices) != 0) {
            // use the maximum safe volume for headset devices, to avoid sudden loud output
            step = safeMediaVolumeIndex(device);
        } else {
            // not in safe state, or not a safe device: use the maximum index
            step = streamState.getMaxIndex();
        }
        if (aliasIndex != 0) {
            aliasIndex = step;
        }
    } else {
        // streamType and streamTypeAlias may differ; on some devices such as TV,
        // music and voice both alias to music
        step = rescaleIndex(10, streamType, streamTypeAlias);
    }
    // if the flags ask for it, or this is the UI sounds stream (the alias of STREAM_SYSTEM),
    // handle the ringer mode
    if (((flags & AudioManager.FLAG_ALLOW_RINGER_MODES) != 0) ||
            (streamTypeAlias == getUiSoundsStreamType())) {
        // current ringer mode
        int ringerMode = getRingerModeInternal();
        // if already in vibrate mode, no need to vibrate again
        if (ringerMode == AudioManager.RINGER_MODE_VIBRATE) {
            flags &= ~AudioManager.FLAG_VIBRATE;
        }
        final int result = checkForRingerModeChange(aliasIndex, direction, step,
                streamState.mIsMuted, callingPackage, flags);
        adjustVolume = (result & FLAG_ADJUST_VOLUME) != 0;
        if ((result & AudioManager.FLAG_SHOW_SILENT_HINT) != 0) {
            flags |= AudioManager.FLAG_SHOW_SILENT_HINT;
        }
        // If suppressing a volume down adjustment in vibrate mode, display the UI hint
        if ((result & AudioManager.FLAG_SHOW_VIBRATE_HINT) != 0) {
            flags |= AudioManager.FLAG_SHOW_VIBRATE_HINT;
        }
    }
    // If the ringer mode or zen is muting the stream, do not change stream unless
    // it'll cause us to exit dnd
    if (!volumeAdjustmentAllowedByDnd(streamTypeAlias, flags)) {
        adjustVolume = false;
    }
    int oldIndex = mStreamStates[streamType].getIndex(device);
    // direction != ADJUST_SAME means the volume really has to change, and how depends on direction
    if (adjustVolume && (direction != AudioManager.ADJUST_SAME)) {
        mAudioHandler.removeMessages(MSG_UNMUTE_STREAM);
        if (isMuteAdjust) {
            boolean state;
            // TOGGLE_MUTE simply inverts the previous mute state
            if (direction == AudioManager.ADJUST_TOGGLE_MUTE) {
                state = !streamState.mIsMuted;
            } else {
                state = direction == AudioManager.ADJUST_MUTE;
            }
            if (streamTypeAlias == AudioSystem.STREAM_MUSIC) {
                // mute or unmute system audio
                setSystemAudioMute(state);
            }
            for (int stream = 0; stream < mStreamStates.length; stream++) {
                // several streams can satisfy this, since multiple streams share the same alias
                if (streamTypeAlias == mStreamVolumeAlias[stream]) {
                    if (!(readCameraSoundForced()
                            && (mStreamStates[stream].getStreamType()
                                    == AudioSystem.STREAM_SYSTEM_ENFORCED))) {
                        // set this streamType's mute state
                        mStreamStates[stream].mute(state);
                    }
                }
            }
        } else if ((direction == AudioManager.ADJUST_RAISE) &&
                !checkSafeMediaVolume(streamTypeAlias, aliasIndex + step, device)) {
            Log.e(TAG, "adjustStreamVolume() safe volume index = " + oldIndex);
            mVolumeController.postDisplaySafeVolumeWarning(flags);
        } else if (((device & mFullVolumeDevices) == 0)
                && (streamState.adjustIndex(direction * step, device, caller)
                        || streamState.mIsMuted)) {
            // Post message to set system volume (it in turn will post a
            // message to persist).
            if (streamState.mIsMuted) {
                // Unmute the stream if it was previously muted
                if (direction == AudioManager.ADJUST_RAISE) {
                    // unmute immediately for volume up
                    streamState.mute(false);
                } else if (direction == AudioManager.ADJUST_LOWER) {
                    if (mIsSingleVolume) {
                        sendMsg(mAudioHandler, MSG_UNMUTE_STREAM, SENDMSG_QUEUE,
                                streamTypeAlias, flags, null, UNMUTE_STREAM_DELAY);
                    }
                }
            }
            // post the device volume update
            sendMsg(mAudioHandler, MSG_SET_DEVICE_VOLUME, SENDMSG_QUEUE,
                    device, 0, streamState, 0);
        }
        ......
}
The code is annotated; if anything is unclear, read the comments!
2: rescaleIndex — converting a volume index between streams
As the name suggests, it converts a volume index from one stream type to another.
private int rescaleIndex(int index, int srcStream, int dstStream) {
    int srcRange = mStreamStates[srcStream].getMaxIndex() - mStreamStates[srcStream].getMinIndex();
    int dstRange = mStreamStates[dstStream].getMaxIndex() - mStreamStates[dstStream].getMinIndex();
    if (srcRange == 0) {
        Log.e(TAG, "rescaleIndex : index range should not be zero");
        return mStreamStates[dstStream].getMinIndex();
    }
    return mStreamStates[dstStream].getMinIndex() +
            ((index - mStreamStates[srcStream].getMinIndex()) * dstRange + srcRange / 2) / srcRange;
}
The key to this code is the return statement. srcRange is the adjustable index range of srcStream, say [1, 10], and dstRange is that of dstStream, say [10, 20]. If the incoming index of srcStream is 5, what should it become on dstStream?
Because we are still in the volume-index domain, which is linear (we have not yet entered the non-linear decibel domain), a simple proportional conversion works. To shorten the notation, write mStreamStates[dstStream].getMinIndex() as dst.min (and similarly src.min), and call the return value x; the return expression then becomes:
$$x = dst.min + \frac{(index - src.min)\cdot dstRange + \frac{srcRange}{2}}{srcRange}$$
Rearranging the equation:
$$\frac{x - dst.min - 0.5}{index - src.min} = \frac{dstRange}{srcRange}$$
The x being solved for is exactly the proportional mapping we want; the srcRange/2 term (the 0.5 above) is there so that the integer division rounds to the nearest value instead of truncating. A quick numeric check follows.
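Here is a standalone sketch of the formula with made-up ranges (not the AudioService method itself), just to verify the arithmetic:

// Standalone re-implementation of the rescale formula with example ranges.
class RescaleDemo {
    static int rescale(int index, int srcMin, int srcMax, int dstMin, int dstMax) {
        int srcRange = srcMax - srcMin;
        int dstRange = dstMax - dstMin;
        if (srcRange == 0) return dstMin;
        // + srcRange / 2 makes the integer division round to nearest instead of truncating
        return dstMin + ((index - srcMin) * dstRange + srcRange / 2) / srcRange;
    }

    public static void main(String[] args) {
        // index 5 on a [1, 10] stream maps to 14 on a [10, 20] stream
        System.out.println(rescale(5, 1, 10, 10, 20)); // prints 14
    }
}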
4: handling the actual volume raise/lower work
This step is handled by three if branches, as follows:
if (isMuteAdjust) {
    // handle the mute / unmute case
} else if ((direction == AudioManager.ADJUST_RAISE)
        && !checkSafeMediaVolume(streamTypeAlias, aliasIndex + step, device)) {
    // this branch (safe-volume warning) is skipped here
} else if (((device & mFullVolumeDevices) == 0)
        && (streamState.adjustIndex(direction * step, device, caller) || streamState.mIsMuted)) {
    // handle the volume raise / lower case
}
private boolean isMuteAdjust(int adjust) {
    return adjust == AudioManager.ADJUST_MUTE || adjust == AudioManager.ADJUST_UNMUTE
            || adjust == AudioManager.ADJUST_TOGGLE_MUTE;
}
AudioManager.ADJUST_MUTE: mute
AudioManager.ADJUST_UNMUTE: unmute
AudioManager.ADJUST_TOGGLE_MUTE: if previously muted, unmute; otherwise mute (an example call follows)
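For instance (an illustrative app-side call; the stream and flag are arbitrary), toggling mute on the media stream looks like this:

// AudioService turns this into mute()/unmute() on every VolumeStreamState sharing the MUSIC alias.
audioManager.adjustStreamVolume(AudioManager.STREAM_MUSIC,
        AudioManager.ADJUST_TOGGLE_MUTE, AudioManager.FLAG_SHOW_UI);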
The handling strategy for this case:
// simply invert the previous mute state
if (direction == AudioManager.ADJUST_TOGGLE_MUTE) {
    state = !streamState.mIsMuted;
} else {
    state = direction == AudioManager.ADJUST_MUTE;
}
if (streamTypeAlias == AudioSystem.STREAM_MUSIC) {
    // mute or unmute system audio
    setSystemAudioMute(state);
}
for (int stream = 0; stream < mStreamStates.length; stream++) {
    // work per alias: every VolumeStreamState whose streamType maps to the same alias is handled
    if (streamTypeAlias == mStreamVolumeAlias[stream]) {
        // except the forced camera-sound / SYSTEM_ENFORCED case
        if (!(readCameraSoundForced()
                && (mStreamStates[stream].getStreamType() == AudioSystem.STREAM_SYSTEM_ENFORCED))) {
            // set this streamType's mute state
            mStreamStates[stream].mute(state);
        }
    }
}
direction determines whether to mute or unmute, and in the end the mute method of every VolumeStreamState object with the same alias is called:
public void mute(boolean state) {
    boolean changed = false;
    synchronized (VolumeStreamState.class) {
        // if the requested state equals the current mIsMuted there is nothing to repeat
        if (state != mIsMuted) {
            changed = true;
            mIsMuted = state;
            // sendMsg: MSG_SET_ALL_VOLUMES is the what; SENDMSG_QUEUE is the send/drop/replace
            // policy and is not put into the message; the third-to-last argument (this) is
            // carried inside the message
            sendMsg(mAudioHandler, MSG_SET_ALL_VOLUMES, SENDMSG_QUEUE, 0, 0, this, 0);
        }
    }
    ......
}
Volume adjustment almost always follows this pattern: get the streamType's volume management class VolumeStreamState, post a message to the AudioHandler, and finally return into VolumeStreamState's own volume methods. VolumeStreamState therefore encapsulates most of the volume handling.
The MSG_SET_ALL_VOLUMES message handler calls setAllVolumes --> applyAllVolumes in turn; let's jump straight to the last method:
public void applyAllVolumes() {
    final boolean isAvrcpAbsVolSupported = mDeviceBroker.isAvrcpAbsoluteVolumeSupported();
    synchronized (VolumeStreamState.class) {
        // apply device specific volumes first
        int index;
        for (int i = 0; i < mIndexMap.size(); i++) {
            final int device = mIndexMap.keyAt(i);
            if (device != AudioSystem.DEVICE_OUT_DEFAULT) {
                if (mIsMuted) {
                    index = 0;
                // why + 5 before dividing by 10? see the note below the code
                } else if ((device & AudioSystem.DEVICE_OUT_ALL_A2DP) != 0
                        && isAvrcpAbsVolSupported) {
                    index = getAbsoluteVolumeIndex((getIndex(device) + 5) / 10);
                } else if ((device & mFullVolumeDevices) != 0) {
                    index = (mIndexMax + 5) / 10;
                } else if ((device & AudioSystem.DEVICE_OUT_HEARING_AID) != 0) {
                    index = (mIndexMax + 5) / 10;
                } else {
                    index = (mIndexMap.valueAt(i) + 5) / 10;
                }
                setStreamVolumeIndex(index, device);
            }
        }
        // apply default volume last: by convention, default device volume will be used
        // by audio policy manager if no explicit volume is present for a given device type
        if (mIsMuted) {
            index = 0;
        } else {
            index = (getIndex(AudioSystem.DEVICE_OUT_DEFAULT) + 5) / 10;
        }
        setStreamVolumeIndex(index, AudioSystem.DEVICE_OUT_DEFAULT);
    }
}
i. Since mute and unmute are being handled here, they apply to all devices, so every device in mIndexMap is iterated and its volume adjusted in turn. If muting, the index is 0; if unmuting, the previously saved index is taken, plus 5 and divided by 10. Dividing by 10 is easy to understand, because all indices inside VolumeStreamState are scaled up by 10 and must be scaled back down when setStreamVolumeIndex pushes them further down; the + 5 before the division simply makes the integer division round to the nearest value instead of truncating (e.g. a stored 68 becomes (68 + 5) / 10 = 7 instead of 6).
ii. Whichever branch is taken, setStreamVolumeIndex is called in the end. From that method onwards the code barely touches AudioService anymore, so I will not follow it yet and will finish the AudioService side first.
To sum up: mute sets the volume index to 0; unmute takes the previously saved index from mIndexMap, adds 5 and divides by 10; either way, setStreamVolumeIndex is finally called to push the volume further down the stack.
if (((device & mFullVolumeDevices) == 0)
        && (streamState.adjustIndex(direction * step, device, caller) || streamState.mIsMuted))
direction can be ADJUST_LOWER (-1), ADJUST_RAISE (1) or ADJUST_SAME (0) (plus the mute/unmute values, but those never reach this branch), and step was obtained earlier by rescaling 10, so the index is adjusted by +10/-10 each time. Into adjustIndex:
public boolean adjustIndex(int deltaIndex, int device, String caller) {
    // previous volume plus this change of +10 / -10
    return setIndex(getIndex(device) + deltaIndex, device, caller);
}

public int getIndex(int device) {
    synchronized (VolumeStreamState.class) {
        int index = mIndexMap.get(device, -1);
        if (index == -1) {
            // there is always an entry for AudioSystem.DEVICE_OUT_DEFAULT
            index = mIndexMap.get(AudioSystem.DEVICE_OUT_DEFAULT);
        }
        return index;
    }
}

// set the volume index
public boolean setIndex(int index, int device, String caller) {
    boolean changed;
    int oldIndex;
    synchronized (mSettingsLock) {
        synchronized (VolumeStreamState.class) {
            oldIndex = getIndex(device);
            index = getValidIndex(index); // index must lie within the min/max range
            if ((mStreamType == AudioSystem.STREAM_SYSTEM_ENFORCED) && mCameraSoundForced) {
                index = mIndexMax;
            }
            // store the new volume index into the map
            mIndexMap.put(device, index);

            changed = oldIndex != index; // did the index change?
            final boolean isCurrentDevice = (device == getDeviceForStream(mStreamType));
            final int numStreamTypes = AudioSystem.getNumStreamTypes();
            for (int streamType = numStreamTypes - 1; streamType >= 0; streamType--) {
                final VolumeStreamState aliasStreamState = mStreamStates[streamType];
                // skip ourselves; find VolumeStreamStates whose alias equals ours and that also
                // track this device, and set their index as well
                if (streamType != mStreamType
                        && mStreamVolumeAlias[streamType] == mStreamType
                        && (changed || !aliasStreamState.hasIndexForDevice(device))) {
                    final int scaledIndex = rescaleIndex(index, mStreamType, streamType);
                    // other streamTypes in mStreamStates that map to the same alias get the same
                    // device updated too
                    aliasStreamState.setIndex(scaledIndex, device, caller);
                    if (isCurrentDevice) {
                        aliasStreamState.setIndex(scaledIndex, getDeviceForStream(streamType), caller);
                    }
                }
            }
            // Mirror changes in SPEAKER ringtone volume on SCO when
            if (changed && mStreamType == AudioSystem.STREAM_RING
                    && device == AudioSystem.DEVICE_OUT_SPEAKER) {
                for (int i = 0; i < mIndexMap.size(); i++) {
                    int otherDevice = mIndexMap.keyAt(i);
                    // why is the new index mirrored onto the SCO-related devices as well?
                    if ((otherDevice & AudioSystem.DEVICE_OUT_ALL_SCO) != 0) {
                        mIndexMap.put(otherDevice, index);
                    }
                }
            }
        }
    }
    return changed;
}
The details are in the code comments. The main job is to store the new volume index into mIndexMap, and at the same time walk all the other VolumeStreamState objects: those whose alias equals this stream also have their index set. Why?
My understanding: streams that share the same alias may also share the same output device, so their volume should change along with it.
The handling of the third condition is similar: based on direction the volume is raised or lowered, the corresponding message is sent and processed just as in the first condition, and setStreamVolumeIndex is finally called to push the volume downwards.
The setStreamVolume path differs very little from adjustStreamVolume, so it is skipped here; if you need it, ask me for a copy of the code with detailed annotations.
1) The streamType being set is used to find its volume state management class VolumeStreamState; this class gathers the core volume-management methods and internally keeps a map of every output device and its volume index, and it is ultimately this index that gets adjusted.
2) At the same time, the streamType, via its strategy, determines the output device, and the index is set for that device.
3) Once one VolumeStreamState adjusts its volume, every other VolumeStreamState with the same alias must be adjusted as well.
4) The rest is handling the ringer mode, the fixed-volume devices and the safe-volume devices.
5) Once the volume index is settled, setStreamVolumeIndex is called downwards, where the index is converted into decibels.
Remember: the important parameters that AudioService finally passes down through setStreamVolumeIndex are:
1) the output device; 2) the volume index.
Before analyzing volume setting in the AudioPolicyManager module, I recommend reading the article on parsing the Audio strategy configuration files ("Audio解析strategy配置文件") first, because this part involves many strategy and VolumeCurves (volume curve) concepts and the dependency links between them.
status_t AudioPolicyManager::setStreamVolumeIndex(audio_stream_type_t stream,
                                                  int index,
                                                  audio_devices_t device)
{
    auto attributes = mEngine->getAttributesForStreamType(stream);
    ALOGV("%s: stream %s attributes=%s", __func__,
          toString(stream).c_str(), toString(attributes).c_str());
    return setVolumeIndexForAttributes(attributes, index, device);
}
This code mainly converts the streamType into its corresponding attributes and then calls setVolumeIndexForAttributes to continue downwards. How is the streamType converted into attributes?
The answer lies in the strategy and VolumeCurves configuration files parsed at system startup, which yield the dependency relations between strategies and VolumeCurves; using those relations:
streamType --> productStrategy --> iterate all of its attributes; if an attribute's streamType equals the incoming streamType, return that attribute.
Remember the AudioOutputDescriptor class? During AudioPolicyManager initialization, every time an output is opened a descriptor for it is stored in mOutputs:
using RoutingActivities = std::map<product_strategy_t, RoutingActivity>;
using VolumeActivities = std::map<VolumeSource, VolumeActivity>;

class AudioOutputDescriptor : public AudioPortConfig,
                              public AudioIODescriptorInterface,
                              // ClientMapHandler is a map of clients added via addClient();
                              // each client represents an app-side AudioTrack
                              public ClientMapHandler<TrackClientDescriptor>
{
    RoutingActivities mRoutingActivities;
    // std::map type: mVolumeActivities[vs] default-constructs an entry,
    // no explicit add/insert is needed
    VolumeActivities mVolumeActivities;

    // The ActiveClients shows the clients that contribute to the @VolumeSource counts
    // and may include upstream clients from a duplicating thread.
    // Compare with the ClientMap (mClients) which are external AudioTrack clients of the
    // output descriptor (and do not count internal PatchTracks).
    TrackClientVector mActiveClients;
};
Because several apps may end up using the same output in the middle layers, AudioOutputDescriptor inherits the ClientMapHandler collection class. The parent ClientMapHandler keeps a map of client information, and AudioOutputDescriptor itself also has an mActiveClients member holding client information; what exactly distinguishes the two, I'm honestly not too sure, ha!
mVolumeActivities: objects holding the volume-related state of the clients, used when the volume is raised or lowered.
mRoutingActivities: objects holding the routing-related state of the clients.
How does a client get added?
See the following code:
void AudioOutputDescriptor::setClientActive(const sp<TrackClientDescriptor>& client, bool active)
{
    // std::find returns an iterator to the element if found, otherwise the end iterator
    auto clientIter = std::find(begin(mActiveClients), end(mActiveClients), client);
    // if active == "the client is already tracked", there is nothing to do: activating a client
    // that is already in the list, or deactivating one that is not, are both ignored
    if (active == (clientIter != end(mActiveClients))) {
        ALOGW("%s(%s): ignored active: %d, current stream count %d", __func__,
              client->toShortString().c_str(), active,
              mRoutingActivities.at(client->strategy()).getActivityCount());
        return;
    }
    if (active) {
        // append at the tail
        mActiveClients.push_back(client);
    } else {
        // remove
        mActiveClients.erase(clientIter);
    }
    const int delta = active ? 1 : -1;
    // operator[] creates the entry on first use; delta changes the activity (ref) count
    mRoutingActivities[client->strategy()].changeActivityCount(delta);
    mVolumeActivities[client->volumeSource()].changeActivityCount(delta);

    // Handle non-client-specific activity ref count
    int32_t oldGlobalActiveCount = mGlobalActiveCount;
    if (!active && mGlobalActiveCount < 1) {
        ALOGW("%s(%s): invalid deactivation with globalRefCount %d", __func__,
              client->toShortString().c_str(), mGlobalActiveCount);
        mGlobalActiveCount = 1;
    }
    mGlobalActiveCount += delta;
    ......
    client->setActive(active);
}
When a client app plays audio through an AudioTrack, an output is selected for it, and once that output has been found this method is called.
status_t AudioPolicyManager::setVolumeIndexForAttributes(const audio_attributes_t &attributes,
                                                          int index,
                                                          audio_devices_t device)
{
    // get the volumeGroup linked to these attributes
    auto group = mEngine->getVolumeGroupForAttributes(attributes);
    if (group == VOLUME_GROUP_NONE) {
        ALOGD("%s: no group matching with %s", __FUNCTION__, toString(attributes).c_str());
        return BAD_VALUE;
    }
    status_t status = NO_ERROR;
    // from the attributes get the matching volumeGroup, and from the volumeGroup the VolumeCurves
    IVolumeCurves &curves = getVolumeCurves(attributes);
    // static cast: converting towards the base type is safe, the other direction has no runtime check
    VolumeSource vs = toVolumeSource(group);
    product_strategy_t strategy = mEngine->getProductStrategyForAttributes(attributes);

    // store the new volume index into the mIndexCur member of the VolumeCurves
    status = setVolumeCurveIndex(index, device, curves);
    if (status != NO_ERROR) {
        ALOGE("%s failed to set curve index for group %d device 0x%X", __func__, group, device);
        return status;
    }

    audio_devices_t curSrcDevice;
    auto curCurvAttrs = curves.getAttributes();
    // take the first attributes entry and use it to query the current output device curSrcDevice
    if (!curCurvAttrs.empty() && curCurvAttrs.front() != defaultAttr) {
        auto attr = curCurvAttrs.front();
        // the device suited to this attr
        curSrcDevice = mEngine->getOutputDevicesForAttributes(attr, nullptr, false).types();
    } else if (!curves.getStreamTypes().empty()) {
        auto stream = curves.getStreamTypes().front();
        curSrcDevice = mEngine->getOutputDevicesForStream(stream, false).types();
    } else {
        ALOGE("%s: Invalid src %d: no valid attributes nor stream", __func__, vs);
        return BAD_VALUE;
    }
    // if several devices are present, pick one of them
    curSrcDevice = Volume::getDeviceForVolume(curSrcDevice);

    // iterate over every open output
    for (size_t i = 0; i < mOutputs.size(); i++) {
        sp<SwAudioOutputDescriptor> desc = mOutputs.valueAt(i);
        // the devices this output was opened on
        audio_devices_t curDevice = desc->devices().types();
        if (curDevice & AUDIO_DEVICE_OUT_SPEAKER_SAFE) {
            curDevice |= AUDIO_DEVICE_OUT_SPEAKER;
            curDevice &= ~AUDIO_DEVICE_OUT_SPEAKER_SAFE;
        }
        bool applyVolume = false;
        // this device supports hardware gain control: the gain tag (AudioGains) of the opened
        // device allows volume adjustment
        if (desc->useHwGain()) {
            // a client of the matching volume group must be active (or we must be in a call)
            if (!(desc->isActive(toVolumeSource(group)) || isInCall())) {
                continue;
            }
            for (const auto &productStrategy : mEngine->getOrderedProductStrategies()) {
                auto activeClients = desc->clientsList(true, productStrategy, false);
                // no active client, no volume change
                if (activeClients.empty()) {
                    continue;
                }
                bool isPreempted = false;
                bool isHigherPriority = productStrategy < strategy;
                for (const auto &client : activeClients) {
                    if (isHigherPriority && (client->volumeSource() != vs)) {
                        applyVolume = false;
                        isPreempted = true;
                        break;
                    }
                    // However, continue for loop to ensure no higher prio clients running on output
                    // a client matches the volume source of these curves
                    if (client->volumeSource() == vs) {
                        applyVolume = true;
                    }
                }
                if (isPreempted || applyVolume) {
                    break;
                }
            }
            if (!applyVolume) {
                continue; // next output
            }
            // eventually converts the index into decibels and stores the dB value into the
            // VolumeActivities member of the AudioOutputDescriptor
            status_t volStatus = checkAndSetVolume(curves, vs, index, desc, curDevice,
                                                   (vs == toVolumeSource(AUDIO_STREAM_SYSTEM) ?
                                                        TOUCH_SOUND_FIXED_DELAY_MS : 0));
            if (volStatus != NO_ERROR) {
                status = volStatus;
            }
            continue;
        }
        ......
        if (device != AUDIO_DEVICE_OUT_DEFAULT_FOR_VOLUME) {
            curSrcDevice |= device; // merge the current device with the device passed from above
            // getDeviceForVolume picks one device; if it intersects curSrcDevice, this output is
            // related to the device whose volume is being changed
            applyVolume = (Volume::getDeviceForVolume(curDevice) & curSrcDevice) != 0;
        } else {
            // check whether the curves already hold an index for curSrcDevice
            applyVolume = !curves.hasVolumeIndexForDevice(curSrcDevice);
        }
        if (applyVolume) {
            // FIXME: workaround for truncated touch sounds
            // delayed volume change for system stream to be removed when the problem is
            // handled by system UI
            status_t volStatus = checkAndSetVolume(curves, vs, index, desc, curDevice,
                                                   ((vs == toVolumeSource(AUDIO_STREAM_SYSTEM)) ?
                                                        TOUCH_SOUND_FIXED_DELAY_MS : 0));
            if (volStatus != NO_ERROR) {
                status = volStatus;
            }
        }
    }
    // group is the volume group bound to the attributes, i.e. to the streamType's attributes
    mpClientInterface->onAudioVolumeGroupChanged(group, 0 /*flags*/);
    return status;
}
There is a lot of code, but reading through it, it only does a few things:
status_t AudioPolicyManager::checkAndSetVolume(IVolumeCurves &curves,
                                               VolumeSource volumeSource,
                                               int index,
                                               const sp<AudioOutputDescriptor>& outputDesc,
                                               audio_devices_t device,
                                               int delayMs,
                                               bool force)
{
    // if the muteCount inside VolumeActivities is greater than 0, the volume must not be changed
    if (outputDesc->isMuted(volumeSource)) {
        ALOGVV("%s: volume source %d muted count %d active=%d", __func__, volumeSource,
               outputDesc->getMuteCount(volumeSource), outputDesc->isActive(volumeSource));
        return NO_ERROR;
    }
    ........
    if (device == AUDIO_DEVICE_NONE) {
        // devices() returns the devices the output was opened on
        device = outputDesc->devices().types();
    }
    // compute the dB value: the device is converted into a device_category, the matching curve is
    // looked up, and the index is converted into decibels
    float volumeDb = computeVolume(curves, volumeSource, index, device);
    if (outputDesc->isFixedVolume(device) ||
            // Force VoIP volume to max for bluetooth SCO
            ((isVoiceVolSrc || isBtScoVolSrc) && (device & AUDIO_DEVICE_OUT_ALL_SCO) != 0)) {
        volumeDb = 0.0f;
    }
    // store the value into the internal VolumeActivities member
    outputDesc->setVolume(volumeDb, volumeSource, curves.getStreamTypes(), device, delayMs, force);

    if (isVoiceVolSrc || isBtScoVolSrc) {
        float voiceVolume;
        // Force voice volume to max or mute for Bluetooth SCO as other attenuations are managed
        // by the headset
        if (isVoiceVolSrc) {
            voiceVolume = (float)index / (float)curves.getVolumeIndexMax();
        } else {
            voiceVolume = index == 0 ? 0.0 : 1.0;
        }
        if (voiceVolume != mLastVoiceVolume) {
            mpClientInterface->setVoiceVolume(voiceVolume, delayMs);
            mLastVoiceVolume = voiceVolume;
        }
    }
    return NO_ERROR;
}
The code above mainly performs two tasks:
1. The index-to-dB conversion, which calls in turn:
AudioPolicyManager::computeVolume --> VolumeCurves::volIndexToDb(device_category deviceCat, int indexInUi) --> VolumeCurve::volIndexToDb(int indexInUi, int volIndexMin, int volIndexMax)
Inside VolumeCurves, the curves are kept in a map keyed by device category (device_category): the output device is first converted into its category, and the matching VolumeCurve is then asked to convert the index:
// indexInUi is the volume index passed down from the upper layer;
// volIndexMin / volIndexMax are the adjustable minimum and maximum
float VolumeCurve::volIndexToDb(int indexInUi, int volIndexMin, int volIndexMax) const
{
    ALOG_ASSERT(!mCurvePoints.isEmpty(), "Invalid volume curve");
    if (volIndexMin < 0 || volIndexMax < 0) {
        // In order to let AudioService initialize the min and max, convention is to use -1
        return NAN;
    }
    if (indexInUi < volIndexMin) {
        // an index of 0 means mute request when volIndexMin > 0
        if (indexInUi == 0) {
            ALOGV("VOLUME forcing mute for index 0 with min index %d", volIndexMin);
            return VOLUME_MIN_DB;
        }
        ALOGV("VOLUME remapping index from %d to min index %d", indexInUi, volIndexMin);
        indexInUi = volIndexMin;
    } else if (indexInUi > volIndexMax) {
        ALOGV("VOLUME remapping index from %d to max index %d", indexInUi, volIndexMax);
        indexInUi = volIndexMax;
    }

    size_t nbCurvePoints = mCurvePoints.size();
    // the volume index in the UI is relative to the min and max volume indices for this stream
    // nbSteps is the total span of the curve's index axis (1 ~ nbSteps); the +1 keeps the minimum
    // at 1 so that 0 can mean mute. Note: these are curve indices, not the attenuation values.
    int nbSteps = 1 + mCurvePoints[nbCurvePoints - 1].mIndex - mCurvePoints[0].mIndex;
    // map the UI index, as a fraction of its own (min, max) range, onto the curve's index axis
    int volIdx = (nbSteps * (indexInUi - volIndexMin)) / (volIndexMax - volIndexMin);

    // Where would this volume index been inserted in the curve point
    // only a comparison, nothing is inserted; the position tells us which points to interpolate
    size_t indexInUiPosition = mCurvePoints.orderOf(CurvePoint(volIdx, 0));
    if (indexInUiPosition >= nbCurvePoints) {
        // use last point of table
        return mCurvePoints[nbCurvePoints - 1].mAttenuationInMb / 100.0f;
    }
    if (indexInUiPosition == 0) {
        if (indexInUiPosition != mCurvePoints[0].mIndex) {
            return VOLUME_MIN_DB; // out of bounds
        }
        return mCurvePoints[0].mAttenuationInMb / 100.0f;
    }

    // linear interpolation in the attenuation table in dB
    // the idea: take the dB value of the point just before the insertion position as the base,
    // then add a proportional part based on how far volIdx sits between the previous and next points
    // question: decibels are non-linear, is a proportional interpolation reasonable here?
    // I think so: the base value comes from the previous point, a proper value from the XML
    // configuration, and the interpolated remainder is small enough that the ear cannot tell
    float decibels = (mCurvePoints[indexInUiPosition - 1].mAttenuationInMb / 100.0f) +
            ((float)(volIdx - mCurvePoints[indexInUiPosition - 1].mIndex)) *
                ( ((mCurvePoints[indexInUiPosition].mAttenuationInMb / 100.0f) -
                        (mCurvePoints[indexInUiPosition - 1].mAttenuationInMb / 100.0f)) /
                  ((float)(mCurvePoints[indexInUiPosition].mIndex -
                        mCurvePoints[indexInUiPosition - 1].mIndex)) );

    ALOGV("VOLUME vol index=[%d %d %d], dB=[%.1f %.1f %.1f]",
          mCurvePoints[indexInUiPosition - 1].mIndex, volIdx,
          mCurvePoints[indexInUiPosition].mIndex,
          ((float)mCurvePoints[indexInUiPosition - 1].mAttenuationInMb / 100.0f), decibels,
          ((float)mCurvePoints[indexInUiPosition].mAttenuationInMb / 100.0f));

    // when the index range is [0, 31], override the result with a direct VOLUME_DB table lookup
    if (indexInUi >= 0 && indexInUi < 32 && (volIndexMax == 31))
        decibels = VOLUME_DB[indexInUi];

    // the decibel value
    return decibels;
}
This part of the code is fairly easy to follow; the key is that mCurvePoints comes from parsing the volume configuration file, for example:
<volume stream="AUDIO_STREAM_DTMF" deviceCategory="DEVICE_CATEGORY_HEADSET">
    <point>1,-3000</point>
    <point>33,-2600</point>
    <point>66,-2200</point>
    <point>100,-1800</point>
</volume>
mCurvePoints[x].mIndex is the first column (1, 33, 66, 100) and mAttenuationInMb is the attenuation in millibels (-3000, -2600, ...). The code above essentially converts the index and then looks it up in this table: the incoming index is in the range 0 ~ 31 while mCurvePoints[x].mIndex spans 1 ~ 100, so the 0 ~ 31 value is first mapped proportionally onto the 1 ~ 100 range, and an insertion search then determines between which two points of mCurvePoints the mapped volIdx falls.
1) If indexInUiPosition falls outside the table, the value at the nearest end of the table is used;
2) If it falls inside the table, the dB value of the previous point is taken as the base, and the dB difference between the previous and next points is added in proportion to where the index sits between them. The computation is:
write the previous and next points' attenuation as pre.mb and aft.mb and their indices likewise, and call the inserted index insert.index, which lies between pre.index and aft.index; the attenuation at insert.index is then:
$$MB = pre.mb + \frac{insert.index - pre.index}{aft.index - pre.index} \cdot (aft.mb - pre.mb)$$
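A quick standalone check of this interpolation, using the DTMF / DEVICE_CATEGORY_HEADSET curve quoted above (a sketch, not the AOSP class):

// Linear interpolation between two curve points (index, millibels), mirroring the formula above.
class CurveInterpolationDemo {
    static float interpolateDb(int idx, int preIdx, int preMb, int aftIdx, int aftMb) {
        return (preMb / 100.0f)
                + ((float) (idx - preIdx)) * (((aftMb - preMb) / 100.0f) / (aftIdx - preIdx));
    }

    public static void main(String[] args) {
        // volIdx = 50 falls between the points (33, -2600) and (66, -2200)
        System.out.println(interpolateDb(50, 33, -2600, 66, -2200)); // about -23.9 dB
    }
}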
2. Applying the computed volume: outputDesc->setVolume, whose main purpose is shown below:
bool AudioOutputDescriptor::setVolume(float volumeDb,
                                      VolumeSource volumeSource, const StreamTypeVector &,
                                      audio_devices_t /*device*/,
                                      uint32_t delayMs,
                                      bool force)
{
    // We actually change the volume if:
    // - the float value returned by computeVolume() changed (different from the previous volume)
    // - the force flag is set
    if (volumeDb != getCurVolume(volumeSource) || force) {
        ALOGV("%s for volumeSrc %d, volume %f, delay %d", __func__, volumeSource, volumeDb, delayMs);
        // store the dB value into VolumeActivities; getCurVolume later reads it back and the value
        // is pushed to the AudioFlinger playback thread, onto the stream matching the streamType
        setCurVolume(volumeSource, volumeDb);
        return true;
    }
    return false;
}

bool SwAudioOutputDescriptor::setVolume(float volumeDb,
                                        VolumeSource vs, const StreamTypeVector &streamTypes,
                                        audio_devices_t device,
                                        uint32_t delayMs,
                                        bool force)
{
    StreamTypeVector streams = streamTypes;
    if (!AudioOutputDescriptor::setVolume(volumeDb, vs, streamTypes, device, delayMs, force)) {
        return false;
    }
    if (streams.empty()) {
        streams.push_back(AUDIO_STREAM_MUSIC);
    }
    for (const auto& devicePort : devices()) {
        // the device matches, it exposes a gain controller (hardware volume) and the volume
        // source is active
        if (device == devicePort->type() &&
                devicePort->hasGainController(true) && isActive(vs)) {
            ALOGV("%s: device %s has gain controller", __func__, devicePort->toString().c_str());
            // convert 0 dB into an amplitude value (full scale); the software volume of these
            // streams is set to full scale because the attenuation is done by the hardware gain
            float volumeAmpl = Volume::DbToAmpl(0);
            for (const auto &stream : streams) {
                mClientInterface->setStreamVolume(stream, volumeAmpl, mIoHandle, delayMs);
            }
            // hardware gain update
            AudioGains gains = devicePort->getGains();
            int gainMinValueInMb = gains[0]->getMinValueInMb();
            int gainMaxValueInMb = gains[0]->getMaxValueInMb();
            int gainStepValueInMb = gains[0]->getStepValueInMb();
            int gainValueMb = ((volumeDb * 100) / gainStepValueInMb) * gainStepValueInMb;
            gainValueMb = std::max(gainMinValueInMb, std::min(gainValueMb, gainMaxValueInMb));

            audio_port_config config = {};
            devicePort->toAudioPortConfig(&config);
            config.config_mask = AUDIO_PORT_CONFIG_GAIN;
            config.gain.values[0] = gainValueMb;
            // apply the hardware volume
            return mClientInterface->setAudioPortConfig(&config, 0) == NO_ERROR;
        }
    }
    // the hardware-gain path above was not taken, so everything below is software volume:
    // take the current dB value and convert it into an amplitude value
    float volumeAmpl = Volume::DbToAmpl(getCurVolume(vs));
    if (hasStream(streams, AUDIO_STREAM_BLUETOOTH_SCO)) {
        mClientInterface->setStreamVolume(AUDIO_STREAM_VOICE_CALL, volumeAmpl, mIoHandle, delayMs);
    }
    // apply the amplitude value per stream
    for (const auto &stream : streams) {
        ALOGV("%s output %d for volumeSource %d, volume %f, delay %d stream=%s", __func__,
              mIoHandle, vs, volumeDb, delayMs, toString(stream).c_str());
        mClientInterface->setStreamVolume(stream, volumeAmpl, mIoHandle, delayMs);
    }
    return true;
}
static inline float DbToAmpl(float decibels)
{
    // VOLUME_MIN_DB = -758; anything at or below this is treated as silence
    if (decibels <= VOLUME_MIN_DB) {
        return 0.0f;
    }
    // exp is the base-e exponential (e ≈ 2.718282); the factor 0.115129 comes from the decibel
    // formula, see the derivation below
    return exp( decibels * 0.115129f ); // exp( dB * ln(10) / 20 )
}
If the formula is unclear, first read the sections on sound volume attributes and the derivation of sound-related formulas in the mixing-theory article; here is a quick derivation.
Generally the decibel formula is written as $20\log\frac{P_1}{P_0}$, where $P_1$ is the power being measured and $P_0$ the reference power; since decibels follow a logarithmic law, the ratio of the two determines the decibel value. My reading at the time was that the AOSP code replaces $\log$ with the natural logarithm $\ln$ and takes the reference power $P_0$ to be 10; with the decibels value given, $P_1$ can then be solved for. Abbreviating decibels as db, the derivation goes:
$$db = 20\ln\frac{P_1}{P_0} = 20\ln\frac{P_1}{10}$$
$$\frac{db}{20} = \ln\frac{P_1}{10}$$
$$e^{\frac{db}{20}} = \frac{P_1}{10}$$
$$P_1 = 10\cdot e^{\frac{db}{20}} = e^{\ln 10}\cdot e^{\frac{db}{20}} = e^{\ln 10 + \frac{db}{20}}$$
So at that point I concluded that the AOSP conversion code above was wrong; I simply could not see why it is written as $\frac{\ln 10}{20}$!!!
----------------------------------------------------- Understanding update -----------------------------------------------------
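The knot unties once the formula is read as the standard amplitude/decibel relation rather than a power ratio with a reference of 10 (a reworked derivation; the relation itself is the standard one):

$$db = 20\log_{10}\frac{A_1}{A_0} \;\Longrightarrow\; \frac{A_1}{A_0} = 10^{\frac{db}{20}} = e^{\frac{db}{20}\ln 10} = e^{0.115129\,db}$$

So the constant 0.115129 is exactly $\ln 10 / 20 \approx 2.302585 / 20$, and DbToAmpl converts a dB attenuation into a linear amplitude (gain) factor relative to full scale. A quick check: DbToAmpl(-20) = $e^{-20\times 0.115129} \approx 0.1$, and $20\log_{10}(0.1) = -20$ dB, which is consistent; the AOSP code is therefore correct.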
Continuing with the software volume setting method setStreamVolume mentioned above:
status_t AudioFlinger::setStreamVolume(audio_stream_type_t stream, float value,
                                       audio_io_handle_t output)
{
    // check calling permissions
    if (!settingsAllowed()) {
        return PERMISSION_DENIED;
    }
    status_t status = checkStreamType(stream);
    if (status != NO_ERROR) {
        return status;
    }
    if (output == AUDIO_IO_HANDLE_NONE) {
        return BAD_VALUE;
    }
    LOG_ALWAYS_FATAL_IF(stream == AUDIO_STREAM_PATCH && value != 1.0f,
                        "AUDIO_STREAM_PATCH must have full scale volume");

    AutoMutex lock(mLock);
    // look up the playback-thread instance for this output handle in mPlaybackThreads
    VolumeInterface *volumeInterface = getVolumeInterface_l(output);
    if (volumeInterface == NULL) {
        return BAD_VALUE;
    }
    // store the amplitude value for this stream into the playback thread
    volumeInterface->setStreamVolume(stream, value);

    return NO_ERROR;
}
The code is simple: the output handle is used to find the corresponding playback thread (PlaybackThread), and the volume is set into it.
void AudioFlinger::PlaybackThread::setStreamVolume(audio_stream_type_t stream, float value)
{
    Mutex::Autolock _l(mLock);
    mStreamTypes[stream].volume = value;
    broadcast_l();
}
The playback thread stores a volume value per streamType, or more precisely the amplitude value corresponding to the volume. When playing or mixing audio, the playback thread multiplies the audio data by this volume value and sends the result down to the HAL for playback. With that, the software volume adjustment path is complete!
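To make "multiplying the audio data by the volume" concrete (a toy sketch of software volume, not the MixerThread code):

// Toy illustration of software volume: scale 16-bit PCM samples by the linear gain produced
// by DbToAmpl, clamping to the 16-bit range.
static void applyVolume(short[] pcm, float gain) {
    for (int i = 0; i < pcm.length; i++) {
        int scaled = Math.round(pcm[i] * gain);
        pcm[i] = (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, scaled));
    }
}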
Source: https://blog.csdn.net/jackzhouyu/article/details/130093058