Content tagged android.
A minimalist, easy-to-use live-stream publishing and playback tool: 合育宝 (HeYuBao) APP for Android, v1.0, released April 15, 2021; free to download and use
On April 14, 2021, the Android version of the 合育宝 APP was officially released; you are welcome to download and use it free of charge. 合育宝 is a minimalist, easy-to-use live-stream publishing and playback tool from 上海释锐 (Shanghai Shirui): register with a mobile phone number to start publishing a live stream, and scan a QR code to watch live streams and on-demand videos.
• Using AcousticEchoCanceler (acoustic echo cancellation) on Android
Echo cancellation. The original Google developer documentation:

/**
 * Acoustic Echo Canceler (AEC).
 * <p>Acoustic Echo Canceler (AEC) is an audio pre-processor which removes the contribution of the
 * signal received from the remote party from the captured audio signal.
 * <p>AEC is used by voice communication applications (voice chat, video conferencing, SIP calls)
 * where the presence of echo with significant delay in the signal received from the remote party
 * is highly disturbing. AEC is often used in conjunction with noise suppression (NS).
 * <p>An application creates an AcousticEchoCanceler object to instantiate and control an AEC
 * engine in the audio capture path.
 * <p>To attach the AcousticEchoCanceler to a particular {@link android.media.AudioRecord},
 * specify the audio session ID of this AudioRecord when creating the AcousticEchoCanceler.
 * The audio session is retrieved by calling
 * {@link android.media.AudioRecord#getAudioSessionId()} on the AudioRecord instance.
 * <p>On some devices, an AEC can be inserted by default in the capture path by the platform
 * according to the {@link android.media.MediaRecorder.AudioSource} used. The application should
 * call AcousticEchoCanceler.getEnable() after creating the AEC to check the default AEC activation
 * state on a particular AudioRecord session.
 * <p>See {@link android.media.audiofx.AudioEffect} class for more details on
 * controlling audio effects.
 */

The scenario: the phone plays audio and records audio at the same time, but the sound the phone itself plays is not picked up by its own recording; the echo is cancelled. This is a perfect fit for WeChat-style voice intercom, although WeChat appears to implement its own echo cancellation rather than relying on this API.

The gist of the documentation: when you create an android.media.AudioRecord object, you can obtain an audio session ID from it via getAudioSessionId(). This ID is needed when creating the AcousticEchoCanceler (AcousticEchoCanceler.create(audioSessionId)), and the same ID is passed in when the audio is played back (here with AudioTrack).

1. First, create the AudioRecord object. The parameters (sample rate, channel configuration, bit depth) are worth looking up if you are unsure what they mean.

private AudioRecord mRecorder;
private byte[] pcm;
private int mRecorderBufferSize;

/**
 * Initialize recording.
 */
public void initRecorder() {
    mRecorderBufferSize = AudioRecord.getMinBufferSize(8000,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
    pcm = new byte[320];
    mRecorder = new AudioRecord(MediaRecorder.AudioSource.VOICE_COMMUNICATION, 8000,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, mRecorderBufferSize);
}

2. Next, initialize the AudioTrack, passing in the audioSessionId obtained from the AudioRecord. (A sketch: the stream type and buffer size shown here are assumed defaults, chosen to match the recorder's format.)

private AudioTrack mAudioTrack;

/**
 * Initialize playback on the same audio session as the recorder.
 */
public void initTrack(int audioSessionId) {
    int bufferSize = AudioTrack.getMinBufferSize(8000,
            AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
    mAudioTrack = new AudioTrack(AudioManager.STREAM_VOICE_CALL, 8000,
            AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
            bufferSize, AudioTrack.MODE_STREAM, audioSessionId);
}

3. Then create the AcousticEchoCanceler:

private AcousticEchoCanceler acousticEchoCanceler;

private void initAEC() {
    if (AcousticEchoCanceler.isAvailable()) {
        if (acousticEchoCanceler == null) {
            acousticEchoCanceler = AcousticEchoCanceler.create(audioSessionId);
            Log.d(TAG, "initAEC: ---->" + acousticEchoCanceler + "\t" + audioSessionId);
            if (acousticEchoCanceler == null) {
                Log.e(TAG, "initAEC: ----->AcousticEchoCanceler create fail.");
            } else {
                acousticEchoCanceler.setEnabled(true);
            }
        }
    }
}

4. Finally, play the audio:

public void write(byte[] data) {
    if (mAudioTrack != null && mAudioTrack.getPlayState() == AudioTrack.PLAYSTATE_PLAYING) {
        mAudioTrack.write(data, 0, data.length);
    }
}
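To tie the four steps together, here is a minimal capture-and-playback loop. It is a sketch, not the original author's code: it assumes the fields from the snippets above (mRecorder, mAudioTrack, pcm), a hypothetical isRecording flag, and that it runs on a background thread.

private volatile boolean isRecording = true; // assumed stop flag, not from the original post

public void captureLoop() {
    mRecorder.startRecording();
    mAudioTrack.play();
    while (isRecording) {
        // Read one chunk of PCM from the mic; the AEC works in this capture path
        int read = mRecorder.read(pcm, 0, pcm.length);
        if (read > 0) {
            // Play it back through the AudioTrack on the same audio session,
            // so the played sound is subtracted from subsequent captures
            mAudioTrack.write(pcm, 0, read);
        }
    }
    mRecorder.stop();
    mAudioTrack.stop();
}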
• Android Tutorial 1
Android is an open source, Linux-based operating system for mobile devices such as smartphones and tablet computers. Android was developed by the Open Handset Alliance, led by Google along with other companies. This tutorial will teach you basic Android programming and will also take you through some advanced concepts related to Android application development.

Audience
This tutorial has been prepared for beginners to help them understand basic Android programming. After completing this tutorial you will find yourself at a moderate level of expertise in Android programming, from where you can take yourself to the next level.

Prerequisites
Android programming is based on the Java programming language, so if you have a basic understanding of Java programming, learning Android application development will be fun.

What is Android?
Android is an open source, Linux-based operating system for mobile devices such as smartphones and tablet computers. Android was developed by the Open Handset Alliance, led by Google along with other companies. Android offers a unified approach to application development for mobile devices, which means developers need only develop for Android, and their applications should be able to run on different devices powered by Android.

The first beta version of the Android Software Development Kit (SDK) was released by Google in 2007, whereas the first commercial version, Android 1.0, was released in September 2008. On June 27, 2012, at the Google I/O conference, Google announced the next Android version, 4.1 Jelly Bean, an incremental update whose primary aim was improving the user interface, both in terms of functionality and performance.

The source code for Android is available under free and open source software licenses. Google publishes most of the code under the Apache License version 2.0, and the rest (Linux kernel changes) under the GNU General Public License version 2.

Why Android? Features of Android
Android is a powerful operating system, competing with Apple's iOS, and it supports great features. A few of them are listed below:
1. Beautiful UI: the basic Android OS screen provides a beautiful and intuitive user interface.
2. Connectivity: GSM/EDGE, IDEN, CDMA, EV-DO, UMTS, Bluetooth, Wi-Fi, LTE, NFC and WiMAX.
3. Storage: SQLite, a lightweight relational database, is used for data storage purposes.
4. Media support: H.263, H.264, MPEG-4 SP, AMR, AMR-WB, AAC, HE-AAC, AAC 5.1, MP3, MIDI, Ogg Vorbis, WAV, JPEG, PNG, GIF, and BMP.
5. Messaging: SMS and MMS.
6. Web browser: based on the open-source WebKit layout engine, coupled with Chrome's V8 JavaScript engine, supporting HTML5 and CSS3.
7. Multi-touch: Android has native support for multi-touch, initially made available in handsets such as the HTC Hero.
8. Multi-tasking: users can jump from one task to another, and multiple applications can run simultaneously.
9. Resizable widgets: widgets are resizable, so users can expand them to show more content or shrink them to save space.
10. Multi-language: supports single-direction and bi-directional text.
11. GCM: Google Cloud Messaging (GCM) is a service that lets developers send short message data to their users on Android devices, without needing a proprietary sync solution.
12. Wi-Fi Direct: a technology that lets apps discover and pair directly, over a high-bandwidth peer-to-peer connection.
13. Android Beam: a popular NFC-based technology that lets users instantly share, just by touching two NFC-enabled phones together.
Android Applications
Android applications are usually developed in the Java language using the Android Software Development Kit. Once developed, Android applications can be packaged easily and sold through a store such as Google Play, SlideME, Opera Mobile Store, Mobango, F-droid or the Amazon Appstore. Android powers hundreds of millions of mobile devices in more than 190 countries around the world. It is the largest installed base of any mobile platform, and growing fast: every day more than 1 million new Android devices are activated worldwide. This tutorial has been written with the aim of teaching you how to develop and package Android applications. We will start with environment setup for Android application programming and then drill down into various aspects of Android applications.

Categories of Android applications
There are many Android applications in the market, spread across a wide range of categories.

History of Android
The code names of Android currently range from A to N: Astro, Bender, Cupcake, Donut, Eclair, Froyo, Gingerbread, Honeycomb, Ice Cream Sandwich, Jelly Bean, KitKat, Lollipop and Marshmallow.

What is API level?
API level is an integer value that uniquely identifies the framework API revision offered by a version of the Android platform.

Platform Version | API Level | VERSION_CODE
Android 6.0 | 23 | MARSHMALLOW
Android 5.1 | 22 | LOLLIPOP_MR1
Android 5.0 | 21 | LOLLIPOP
Android 4.4W | 20 | KITKAT_WATCH (KitKat for wearables only)
Android 4.4 | 19 | KITKAT
Android 4.3 | 18 | JELLY_BEAN_MR2
Android 4.2, 4.2.2 | 17 | JELLY_BEAN_MR1
Android 4.1, 4.1.1 | 16 | JELLY_BEAN
Android 4.0.3, 4.0.4 | 15 | ICE_CREAM_SANDWICH_MR1
Android 4.0, 4.0.1, 4.0.2 | 14 | ICE_CREAM_SANDWICH
Android 3.2 | 13 | HONEYCOMB_MR2
Android 3.1.x | 12 | HONEYCOMB_MR1
Android 3.0.x | 11 | HONEYCOMB
Android 2.3.4, 2.3.3 | 10 | GINGERBREAD_MR1
Android 2.3.2, 2.3.1, 2.3 | 9 | GINGERBREAD
Android 2.2.x | 8 | FROYO
Android 2.1.x | 7 | ECLAIR_MR1
Android 2.0.1 | 6 | ECLAIR_0_1
Android 2.0 | 5 | ECLAIR
Android 1.6 | 4 | DONUT
Android 1.5 | 3 | CUPCAKE
Android 1.1 | 2 | BASE_1_1
Android 1.0 | 1 | BASE

You will be glad to know that you can start Android application development on any of the following operating systems:
Microsoft Windows XP or later;
Mac OS X 10.5.8 or later with an Intel chip;
Linux, including GNU C Library 2.7 or later.

All the required tools to develop Android applications are freely available and can be downloaded from the Web. Following is the list of software you will need before you start Android application programming:
Java JDK 5 or later;
Android Studio.

So let us have a look at how to set up the required environment.

Set up the Java Development Kit (JDK)
You can download the latest version of the Java JDK from Oracle's Java site: Java SE Downloads. You will find instructions for installing the JDK in the downloaded files; follow them to install and configure the setup. Finally, set the PATH and JAVA_HOME environment variables to refer to the directories that contain java and javac, typically java_install_dir/bin and java_install_dir respectively. If you are running Windows and installed the JDK in C:\jdk1.8.0_102, you would put the following lines in your C:\autoexec.bat file.
set PATH=C:\jdk1.8.0_102\bin;%PATH%
set JAVA_HOME=C:\jdk1.8.0_102

Alternatively, you could right-click on My Computer, select Properties, then Advanced, then Environment Variables, update the PATH value there and press OK.

On Linux, if the JDK is installed in /usr/local/jdk1.8.0_102 and you use the C shell, you would put the following into your .cshrc file:

setenv PATH /usr/local/jdk1.8.0_102/bin:$PATH
setenv JAVA_HOME /usr/local/jdk1.8.0_102

Alternatively, if you use Android Studio, it will detect automatically where you have installed Java.

Android IDEs
Several sophisticated technologies are available for developing Android applications; the most familiar tools are:
Android Studio
Eclipse IDE (deprecated)

Android architecture
The Android operating system is a stack of software components, roughly divided into five sections and four main layers, as described in the architecture diagram.

Linux kernel
At the bottom of the layers is Linux (Linux 3.6 with approximately 115 patches). This provides a level of abstraction over the device hardware and contains all the essential hardware drivers, such as camera, keypad and display. The kernel also handles all the things that Linux is really good at, such as networking and a vast array of device drivers, which take the pain out of interfacing with peripheral hardware.

Libraries
On top of the Linux kernel there is a set of libraries, including the open-source Web browser engine WebKit, the well-known library libc, the SQLite database (a useful repository for storing and sharing application data), libraries to play and record audio and video, SSL libraries responsible for Internet security, and so on.

Android Libraries
This category encompasses the Java-based libraries that are specific to Android development. Examples include the application framework libraries as well as those that facilitate user interface building, graphics drawing and database access. A summary of some key core Android libraries available to the Android developer follows:
android.app: provides access to the application model and is the cornerstone of all Android applications.
android.content: facilitates content access, publishing and messaging between applications and application components.
android.database: used to access data published by content providers; includes SQLite database management classes.
android.opengl: a Java interface to the OpenGL ES 3D graphics rendering API.
android.os: provides applications with access to standard operating system services, including messages, system services and inter-process communication.
android.text: used to render and manipulate text on a device display.
android.view: the fundamental building blocks of application user interfaces.
android.widget: a rich collection of pre-built user interface components such as buttons, labels, list views, layout managers and radio buttons.
android.webkit: a set of classes intended to allow web-browsing capabilities to be built into applications.

Having covered the Java-based core libraries in the Android runtime, let us turn our attention to the C/C++-based libraries contained in this layer of the Android software stack.

Android Runtime
This is the third section of the architecture, available in the second layer from the bottom.
This section provides a key component called the Dalvik Virtual Machine, a kind of Java Virtual Machine specially designed and optimized for Android. The Dalvik VM makes use of Linux core features such as memory management and multi-threading, which are intrinsic to the Java language. The Dalvik VM enables every Android application to run in its own process, with its own instance of the Dalvik virtual machine. The Android runtime also provides a set of core libraries that enable Android application developers to write applications using the standard Java programming language.

Application Framework
The Application Framework layer provides many higher-level services to applications in the form of Java classes. Application developers are allowed to make use of these services in their applications. The Android framework includes the following key services:
Activity Manager: controls all aspects of the application lifecycle and activity stack.
Content Providers: allow applications to publish and share data with other applications.
Resource Manager: provides access to non-code embedded resources such as strings, color settings and user interface layouts.
Notifications Manager: allows applications to display alerts and notifications to the user.
View System: an extensible set of views used to create application user interfaces.

Applications
You will find all the Android applications at the top layer. You will write your application to be installed on this layer only. Examples of such applications are Contacts, Browser, Games, and so on.

Application components are the essential building blocks of an Android application. These components are loosely coupled by the application manifest file, AndroidManifest.xml, which describes each component of the application and how they interact. There are four main components that can be used within an Android application:
1. Activities: dictate the UI and handle user interaction on the smartphone screen.
2. Services: handle background processing associated with an application.
3. Broadcast Receivers: handle communication between the Android OS and applications.
4. Content Providers: handle data and database management issues.

Activities
An activity represents a single screen with a user interface; in short, an activity performs actions on the screen. For example, an email application might have one activity that shows a list of new emails, another activity to compose an email, and another activity for reading emails. If an application has more than one activity, one of them should be marked as the activity presented when the application is launched. An activity is implemented as a subclass of the Activity class as follows:

public class MainActivity extends Activity {
}

Services
A service is a component that runs in the background to perform long-running operations. For example, a service might play music in the background while the user is in a different application, or it might fetch data over the network without blocking user interaction with an activity. A service is implemented as a subclass of the Service class as follows:

public class MyService extends Service {
}

Broadcast Receivers
Broadcast receivers simply respond to broadcast messages from other applications or from the system.
For example, an application can initiate a broadcast to let other applications know that some data has been downloaded to the device and is available for them to use; a broadcast receiver intercepts this communication and initiates the appropriate action. A broadcast receiver is implemented as a subclass of the BroadcastReceiver class, and each message is broadcast as an Intent object.

public class MyReceiver extends BroadcastReceiver {
    public void onReceive(Context context, Intent intent) {
    }
}

Content Providers
A content provider component supplies data from one application to others on request. Such requests are handled by the methods of the ContentResolver class. The data may be stored in the file system, in a database, or somewhere else entirely. A content provider is implemented as a subclass of the ContentProvider class and must implement a standard set of APIs that enable other applications to perform transactions.

public class MyContentProvider extends ContentProvider {
    public void onCreate() {
    }
}

We will go through these tags in detail while covering application components in individual chapters.

Additional Components
There are additional components used in the construction of the above-mentioned entities, their logic, and the wiring between them:
1. Fragments: represent a portion of the user interface in an Activity.
2. Views: UI elements that are drawn on-screen, including buttons, lists, forms, etc.
3. Layouts: View hierarchies that control the screen format and the appearance of the views.
4. Intents: messages wiring components together.
5. Resources: external elements, such as strings, constants and drawable pictures.
6. Manifest: the configuration file for the application.

Let us start actual programming with the Android framework. Before you write your first example using the Android SDK, make sure that you have set up your Android development environment properly, as explained in the Android Environment Set-up tutorial. I also assume you have a little working knowledge of Android Studio. So let us proceed to write a simple Android application that prints "Hello World!".

Create an Android application
The first step is to create a simple Android application using Android Studio. When you click the Android Studio icon, it shows the welcome screen. Start your application development by selecting "Start a new Android Studio project"; the new-project wizard asks for the application name, package information and location of the project. After entering the application name, you select the form factors your application runs on; here you need to specify the minimum SDK, which in this tutorial is API 23: Android 6.0 (Marshmallow). The next step is selecting the activity template, which determines the default layout for the application. At the final stage the development tool opens, ready for you to write the application code.

Anatomy of an Android application
Before you run your app, you should be aware of a few directories and files in the Android project:
1. java: contains the .java source files for your project. By default, it includes a MainActivity.java source file with an activity class that runs when your app is launched from the app icon.
2. res/drawable-hdpi: a directory for drawable objects designed for high-density screens.
3. res/layout: a directory for files that define your app's user interface.
4. res/values: a directory for various other XML files that contain collections of resources, such as string and color definitions.
5. AndroidManifest.xml: the manifest file, which describes the fundamental characteristics of the app and defines each of its components.
6. build.gradle: an auto-generated file which contains compileSdkVersion, buildToolsVersion, applicationId, minSdkVersion, targetSdkVersion, versionCode and versionName.

The following section gives a brief overview of the important application files.

The Main Activity File
The main activity code is the Java file MainActivity.java. This is the actual application file, which ultimately gets converted to a Dalvik executable and runs your application. Following is the default code generated by the application wizard for the Hello World! application:

package com.example.helloworld;

import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;

public class MainActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
    }
}

Here, R.layout.activity_main refers to the activity_main.xml file located in the res/layout folder. The onCreate() method is one of many methods that are called when an activity is loaded.

The Manifest File
Whatever component you develop as part of your application, you must declare it in AndroidManifest.xml, which resides at the root of the application project directory. This file works as an interface between the Android OS and your application, so if you do not declare a component in this file, it will not be considered by the OS. For example, a default manifest file looks like the following:

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.tutorialspoint7.myapplication">
    <application
        android:allowBackup="true"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:supportsRtl="true"
        android:theme="@style/AppTheme">
        <activity android:name=".MainActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>
</manifest>

Here the <application>...</application> tags enclose the components related to the application. The attribute android:icon points to the application icon available under res/drawable-hdpi; the application uses the image named ic_launcher.png located in the drawable folders. The <activity> tag is used to specify an activity; the android:name attribute specifies the fully qualified class name of the Activity subclass, and the android:label attribute specifies a string to use as the label for the activity. You can specify multiple activities using <activity> tags. The action for the intent filter is named android.intent.action.MAIN to indicate that this activity serves as the entry point for the application. The category for the intent filter is named android.intent.category.LAUNCHER to indicate that the application can be launched from the device's launcher icon. @string refers to the strings.xml file explained below; hence @string/app_name refers to the app_name string defined in strings.xml, which is "HelloWorld". In a similar way, other strings are populated in the application.
Following is the list of tags you will use in your manifest file to specify the different Android application components:
<activity> elements for activities;
<service> elements for services;
<receiver> elements for broadcast receivers;
<provider> elements for content providers.

The Strings File
The strings.xml file is located in the res/values folder, and it contains all the text that your application uses. For example, the names of buttons, labels, default text, and similar types of strings go into this file. This file is responsible for their textual content. For example, a default strings file looks like the following:

<resources>
    <string name="app_name">HelloWorld</string>
    <string name="hello_world">Hello world!</string>
    <string name="menu_settings">Settings</string>
    <string name="title_activity_main">MainActivity</string>
</resources>

The Layout File
activity_main.xml is a layout file available in the res/layout directory; it is referenced by your application when building its interface. You will modify this file very frequently to change the layout of your application. For your "Hello World!" application, this file has the following default layout content:

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_centerHorizontal="true"
        android:layout_centerVertical="true"
        android:padding="@dimen/padding_medium"
        android:text="@string/hello_world"
        tools:context=".MainActivity" />

</RelativeLayout>

This is an example of a simple RelativeLayout, which we will study in a separate chapter. The TextView is an Android control used to build the GUI; it has various attributes, such as android:layout_width and android:layout_height, which are used to set its width, height, and so on. @string refers to the strings.xml file located in the res/values folder; hence @string/hello_world refers to the hello_world string defined in strings.xml, which is "Hello world!".

Running the Application
Let's try to run the Hello World! application we just created. I assume you created your AVD while doing the environment set-up. To run the app from Android Studio, open one of your project's activity files and click the Run icon in the toolbar. Android Studio installs the app on your AVD and starts it, and if everything is fine with your set-up and application, it displays the Emulator window. Congratulations! You have developed your first Android application; now just keep following the rest of the tutorial step by step to become a great Android developer. All the very best. (To be continued.)
• How to download and install the 合育宝 APP?
How to download and install the 合育宝 APP? Download it from the 合育宝 official website (works); or from the Android app markets below, to which the app has been rolled out progressively since October 1, 2020:
Huawei AppGallery: search for 合育宝 and install;
vivo App Store: search for 合育宝 and install;
OPPO App Market: search for 合育宝 and install;
Xiaomi App Store: search for 合育宝 and install;
Tencent 应用宝 (MyApp): search for 合育宝 and install;
360 App Store: search for 合育宝 and install;
Baidu App Store: search for 合育宝 and install.
• An introduction to the 合育宝 APP
The 合育宝 application is a streaming-media tool that runs on Android phones and tablets. With this APP you can start live broadcasts and recorded broadcasts from your phone anytime, anywhere. Recorded videos are automatically published to the public education resource service platform and can be used in everyday teaching scenarios such as student self-study, teacher evaluation, lesson preparation, teaching research, and class observation and review.

Core features:
live streaming;
recorded video (with subtitle support);
a network player.

How to play a network video (live stream or on-demand stream)?
Go to the Player screen;
enter the stream URL manually, or scan it in with the QR-code scanner on the left;
tap the play button to start playback.

How to broadcast live?
Tap the avatar icon in the upper-right corner of the screen to enter the Me screen;
in the Login section, register a free account with your mobile number and sign in;
in the Account & Live section, set the four server parameters once (live server, live application, publish password and playback password); these only need to be set one time;
in the Account & Live section, fill in the broadcast announcement (recording batch, announcement title, description, estimated start time and estimated end time); these parameters should be updated as needed before each broadcast;
after completing the two groups of settings above, enter the Studio;
tap the broadcast button at the bottom of the Studio to go live.

Where are the recorded files?
Tap the Recordings section on the Me screen;
tap a title to play a single video;
tap a batch to play the whole batch of videos;
tap the right arrow to edit the video description or change the video's batch;
tap the share icon to share the video URL with friends through other social apps;
tap the delete icon to delete the video file;
tap [Subtitles] to add subtitles to the video (sign in to the 合育宝 website with the same APP account).
• Android internationalization language codes
Chinese (China): values-zh-rCN
Chinese (Taiwan): values-zh-rTW
Chinese (Hong Kong): values-zh-rHK
English (United States): values-en-rUS
English (United Kingdom): values-en-rGB
English (Australia): values-en-rAU
English (Canada): values-en-rCA
English (Ireland): values-en-rIE
English (India): values-en-rIN
English (New Zealand): values-en-rNZ
English (Singapore): values-en-rSG
English (South Africa): values-en-rZA
Arabic (Egypt): values-ar-rEG
Arabic (Israel): values-ar-rIL
Bulgarian: values-bg-rBG
Catalan: values-ca-rES
Czech: values-cs-rCZ
Danish: values-da-rDK
German (Austria): values-de-rAT
German (Switzerland): values-de-rCH
German (Germany): values-de-rDE
German (Liechtenstein): values-de-rLI
Greek: values-el-rGR
Spanish (Spain): values-es-rES
Spanish (United States): values-es-rUS
Finnish (Finland): values-fi-rFI
French (Belgium): values-fr-rBE
French (Canada): values-fr-rCA
French (Switzerland): values-fr-rCH
French (France): values-fr-rFR
Hebrew: values-iw-rIL
Hindi: values-hi-rIN
Croatian: values-hr-rHR
Hungarian: values-hu-rHU
Indonesian: values-in-rID
Italian (Switzerland): values-it-rCH
Italian (Italy): values-it-rIT
Japanese: values-ja-rJP
Korean: values-ko-rKR
Lithuanian: values-lt-rLT
Latvian: values-lv-rLV
Norwegian Bokmål: values-nb-rNO
Dutch (Belgium): values-nl-rBE
Dutch (Netherlands): values-nl-rNL
Polish: values-pl-rPL
Portuguese (Brazil): values-pt-rBR
Portuguese (Portugal): values-pt-rPT
Romanian: values-ro-rRO
Russian: values-ru-rRU
Slovak: values-sk-rSK
Slovenian: values-sl-rSI
Serbian: values-sr-rRS
Swedish: values-sv-rSE
Thai: values-th-rTH
Tagalog: values-tl-rPH
Turkish: values-tr-rTR
Ukrainian: values-uk-rUA
Vietnamese: values-vi-rVN
Burmese: values-my (a newly added language)
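As a usage sketch (the string name "greeting" and the directory contents are hypothetical), a qualifier from the list above simply becomes part of a resource directory name, and the framework resolves the best match for the current device locale at run time:

// Assumed res/ layout:
//   res/values/strings.xml        ->  <string name="greeting">Hello</string>
//   res/values-zh-rCN/strings.xml ->  <string name="greeting">你好</string>
// Inside an Activity, the framework picks the closest match for the device locale:
String greeting = getString(R.string.greeting);
Log.d("I18N", "locale=" + Locale.getDefault() + " greeting=" + greeting);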
• Understanding the Activity lifecycle
To navigate the transitions between stages of the Activity lifecycle, the Activity class provides six core callbacks: onCreate(), onStart(), onResume(), onPause(), onStop() and onDestroy(). The system invokes the corresponding callback as the Activity enters a new state.

Observed in practice:

====== Launch the app in portrait ======
V/LiveTAG: onCreate is called
V/LiveTAG: onStart is called
V/LiveTAG: onResume is called
====== Rotate the phone to landscape ======
V/LiveTAG: onConfigurationChanged is called
====== Navigate to the second Activity ======
V/LiveTAG: onPause is called
V/LiveTAG: onSaveInstanceState is called
V/LiveTAG: Camera orientaion is:2 width=1280 height=720 previewWidth=1280 priviewHeight=720
V/LiveTAG: onStop is called
====== Return to the home screen via in-app navigation ======
V/LiveTAG: onDestroy is called
V/LiveTAG: onCreate is called
V/LiveTAG: onStart is called
V/LiveTAG: onResume is called
====== Return to the home screen via system back navigation ======
V/LiveTAG: onRestart is called
V/LiveTAG: onStart is called
V/LiveTAG: onResume is called
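A minimal Activity that would produce a trace like the one above might look as follows. This is a sketch: the tag name and the logged callbacks are taken from the output, everything else is assumed. Note that onConfigurationChanged fires on rotation (instead of a destroy/recreate cycle) only if the manifest declares android:configChanges="orientation|screenSize" for the Activity, which the trace implies.

import android.app.Activity;
import android.content.res.Configuration;
import android.os.Bundle;
import android.util.Log;

public class LiveActivity extends Activity {
    private static final String TAG = "LiveTAG";

    @Override protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        Log.v(TAG, "onCreate is called");
    }
    @Override protected void onStart()   { super.onStart();   Log.v(TAG, "onStart is called"); }
    @Override protected void onResume()  { super.onResume();  Log.v(TAG, "onResume is called"); }
    @Override protected void onPause()   { super.onPause();   Log.v(TAG, "onPause is called"); }
    @Override protected void onStop()    { super.onStop();    Log.v(TAG, "onStop is called"); }
    @Override protected void onRestart() { super.onRestart(); Log.v(TAG, "onRestart is called"); }
    @Override protected void onDestroy() { super.onDestroy(); Log.v(TAG, "onDestroy is called"); }

    // Invoked instead of a restart when the manifest opts in via android:configChanges
    @Override public void onConfigurationChanged(Configuration newConfig) {
        super.onConfigurationChanged(newConfig);
        Log.v(TAG, "onConfigurationChanged is called");
    }
}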
• A complete analysis of MediaCodec, Android's native codec API
MediaCodec basics

The MediaCodec class can be used to access Android's low-level multimedia codecs, i.e., encoder/decoder components. It is part of Android's low-level multimedia support infrastructure (usually used together with MediaExtractor, MediaSync, MediaMuxer, MediaCrypto, MediaDrm, Image, Surface and AudioTrack).

Android's low-level multimedia stack is built on the OpenMax framework; any low-level codec module on Android must follow the OpenMax standard. Google provides a series of software codecs by default, including OMX.google.h264.encoder, OMX.google.h264.decoder, OMX.google.aac.encoder, OMX.google.aac.decoder and so on, while hardware codecs are implemented by the chip vendors according to the OpenMax standard. Consequently, phones with different chipsets differ in how hardware encoding/decoding is implemented and how it performs. At the application layer, the MediaCodec API uniformly exposes the audio/video codec capabilities; configuration parameters decide which codec algorithm is used and whether hardware acceleration is employed.

The MediaCodec workflow: architecturally, MediaCodec uses two buffer queues to process data asynchronously, with a set of input and output buffers. You request or receive an empty input buffer, fill it with data, and pass it to the codec for processing. The codec processes the data and writes the result into an empty output buffer. Finally, you request or receive a filled output buffer, consume its contents, and release it back to the codec for reuse.

Concretely:
1. The client dequeues an empty buffer from the input buffer queue [dequeueInputBuffer].
2. The client copies the data to be encoded/decoded into the empty buffer and queues it into the input buffer queue [queueInputBuffer].
3. The MediaCodec module takes a frame of data from the input buffer queue and encodes/decodes it.
4. When processing finishes, MediaCodec marks the original buffer empty and returns it to the input queue, and places the processed data into the output buffer queue.
5. The client dequeues a processed buffer from the output queue [dequeueOutputBuffer].
6. The client renders/plays the processed buffer.
7. After rendering/playback, the client returns the buffer to the output queue [releaseOutputBuffer].

The basic MediaCodec call sequence is:

createEncoderByType/createDecoderByType
configure
start
while (true) {
    dequeueInputBuffer   // take a buffer from the input queue for encoding
    getInputBuffers      // get the input buffer array (a ByteBuffer[])
    queueInputBuffer     // queue the filled input buffer
    dequeueOutputBuffer  // take processed data from the output queue
    getOutPutBuffers     // get the output buffer array (a ByteBuffer[])
    releaseOutputBuffer  // done with the data, release the buffer
}
stop
release

Step 1. Initialize the MediaCodec. There are two ways: by name and by type.

MediaCodec createByCodecName(String name);
MediaCodec createDecoderByType(String type);

For the first approach, pick a MediaCodecInfo according to the MIME type and whether an encoder is wanted, then initialize the MediaCodec by name:

private MediaCodecInfo selectSupportCodec(String mimeType) {
    int numCodecs = MediaCodecList.getCodecCount();
    for (int i = 0; i < numCodecs; i++) {
        MediaCodecInfo codecInfo = MediaCodecList.getCodecInfoAt(i);
        // Skip entries that are not encoders
        if (!codecInfo.isEncoder()) {
            continue;
        }
        // For encoders, check whether the MIME type is supported
        String[] types = codecInfo.getSupportedTypes();
        for (int j = 0; j < types.length; j++) {
            if (types[j].equalsIgnoreCase(mimeType)) {
                return codecInfo;
            }
        }
    }
    return null;
}

The second approach is simpler:

mMediaCodec = MediaCodec.createDecoderByType(MIME_TYPE); // MIME_TYPE, e.g. "video/avc"

Step 2. Configure the codec. Set the encoder parameters via a MediaFormat, which covers the bit rate, frame rate, key-frame interval, and so on, then call mMediaCodec.configure. On API 19 and above we can choose Surface input via mMediaCodec.createInputSurface().

format = MediaFormat.createVideoFormat(MIME_TYPE, width, height);
format.setInteger(MediaFormat.KEY_BIT_RATE, bitrate);
format.setInteger(MediaFormat.KEY_FRAME_RATE, framerate);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1); // key-frame interval, in seconds
mMediaCodec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
mInputSurface = mMediaCodec.createInputSurface();

Step 3. Start the codec and obtain the input and output buffers.

mMediaCodec.start();
mInputBuffers = mMediaCodec.getInputBuffers();
mOutputBuffers = mMediaCodec.getOutputBuffers();

That is how buffers are obtained on API 19; from API 21 you can fetch a ByteBuffer directly:

ByteBuffer inputBuffer = mMediaCodec.getInputBuffer(inputBufferIndex);
ByteBuffer outputBuffer = mMediaCodec.getOutputBuffer(outputBufferIndex);

Step 4. Feed input. There are two modes: ordinary input and Surface input. Ordinary input further splits into two cases: working with a MediaExtractor, or feeding raw data.

Get the index of a usable input buffer:

int inputBufferIndex = mMediaCodec.dequeueInputBuffer(TIMES_OUT);

This returns the index of an input buffer to be filled with valid data, or -1 if no buffer is currently available. The argument is a timeout (TIMES_OUT) in microseconds: with timeoutUs == 0 the call returns immediately; with timeoutUs < 0 it waits indefinitely for an available input buffer; with timeoutUs > 0 it waits up to the given number of microseconds.

Ordinary input, raw-data mode:

ByteBuffer inputBuffer = mInputBuffers[inputBufferIndex];
inputBuffer.clear();                // clear old contents before receiving new data
inputBuffer.put(bytes, 0, len);     // len is the length of the valid data passed in
mMediaCodec.queueInputBuffer(inputBufferIndex, 0, len, timestamp, 0);

The index above addresses the input buffer array obtained from getInputBuffers(); together they give you the requested input buffer. Clear it before use so stale data does not pollute the current frame, then put the data into the buffer and call queueInputBuffer(...) to enqueue it.

Ordinary input with a MediaExtractor, for decoding existing audio/video data:

ByteBuffer inputBuf = decoderInputBuffers[inputBufIndex];
int chunkSize = SDecoder.extractor.readSampleData(inputBuf, 0);
if (chunkSize < 0) {
    SDecoder.decoder.queueInputBuffer(inputBufIndex, 0, 0, 0L, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
} else {
    SDecoder.decoder.queueInputBuffer(inputBufIndex, 0, chunkSize, SDecoder.extractor.getSampleTime(), 0);
    SDecoder.extractor.advance();
}

Surface input: Surface input was introduced in Android 4.3 (API 18), but on some API 18 devices it makes the encoder output very little data and a black picture, so it is better to enable Surface input mode only from API 19.

// Requests a Surface to use as the input to an encoder, in place of input buffers. This may only be
// called after configure(MediaFormat, Surface, MediaCrypto, int) and before start().
// Per the official note above, this must be called after configure() and before start().
mInputSurface = mMediaCodec.createInputSurface();

Step 5. Take output. In streaming, the codec configuration data (SPS, PPS) usually needs to be prepended to every key frame, but MediaCodec emits the configuration data separately in its first output and does not repeat it in later key frames, so we have to stitch it on manually.

Get a usable output buffer: similar to the input side, request one with dequeueOutputBuffer(BufferInfo info, long timeoutUs), passing a BufferInfo object that receives the metadata of the ByteBuffer, with TIMES_OUT as the timeout. Passing 0 for TIMES_OUT means no waiting; since this code is not driven by a dedicated polling thread, that is fine and avoids blocking. But if a dedicated thread keeps polling for output, a zero timeout wastes CPU, so use a reasonable value such as 3 to 10 ms.

BufferInfo mBufferInfo = new MediaCodec.BufferInfo();
int outputBufferIndex = mMediaCodec.dequeueOutputBuffer(mBufferInfo, TIMES_OUT);

Fetch the data:

ByteBuffer outputBuffer = null;
if (Build.VERSION.SDK_INT < Build.VERSION_CODES.LOLLIPOP) {
    outputBuffer = outputBuffers[outputBufferIndex];
} else {
    outputBuffer = mMediaCodec.getOutputBuffer(outputBufferIndex);
}
if ((mBufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
    MediaFormat format = mMediaCodec.getOutputFormat();
    format.setByteBuffer("csd-0", outputBuffer);
    mBufferInfo.size = 0;
}
// On API <= 19, adjust the ByteBuffer position by BufferInfo.offset and
// limit the readable length, otherwise the output data will be garbled.
if (mBufferInfo.size != 0) {
    if (Build.VERSION.SDK_INT <= Build.VERSION_CODES.KITKAT) {
        outputBuffer.position(mBufferInfo.offset);
        outputBuffer.limit(mBufferInfo.offset + mBufferInfo.size);
    }
    // mMuxer.writeSampleData(mTrackIndex, encodedData, bufferInfo);
}

Release the buffer:

mMediaCodec.releaseOutputBuffer(outputBufferIndex, false);

Step 6. Release resources when done with the MediaCodec. Tell the encoder we are finished: for Surface input call mMediaCodec.signalEndOfInputStream(); for ordinary input, pass the MediaCodec.BUFFER_FLAG_END_OF_STREAM flag in queueInputBuffer. Once signaled, the encoder eventually emits an output buffer carrying MediaCodec.BUFFER_FLAG_END_OF_STREAM; after seeing it, call mMediaCodec.release() to destroy the codec.

if (mMediaCodec != null) {
    mMediaCodec.stop();
    mMediaCodec.release();
    mMediaCodec = null;
}

MediaCodec rate control

Rate control is flow control: maximizing the benefit within a set of constraints. It comes up in both TCP and video encoding: for TCP it means controlling the amount of data sent per unit of time; for encoding it means controlling the amount of data output per unit of time.

TCP's constraint is network bandwidth: flow control means using as much bandwidth as possible without causing or aggravating congestion. When bandwidth is plentiful and the network is healthy, we send packets faster; when latency rises and packets are lost, we slow down (continuing to send at high speed may worsen congestion and actually deliver data more slowly).

Video encoding was originally constrained by decoder capability: too high a bit rate meant the stream could not be decoded. As codecs evolved, decoding capability stopped being the bottleneck, and the constraint became transmission bandwidth and file size: we want the best possible picture quality within a data budget.

Encoders generally let you set a target bit rate, but the actual output never matches the setting exactly, because what the encoder really controls during encoding is not the final output bit rate but a quantization parameter (QP), which has no fixed relationship with bit rate; it depends on the image content. We will not expand on this here; interested readers can study the basics of video and audio compression.

Both outgoing TCP packets and frames to be encoded can exhibit spikes, i.e., a large amount of data in a short time. TCP can simply ignore a spike (especially when the network is already congested) without much harm, but if video encoding ignores a spike, picture quality suffers badly: if a few frames carry much more information and the bit rate is still clamped to its previous level, more information must be discarded and the distortion grows. The usual symptom is blocking artifacts, small squares all over the picture as if mosaicked, leaving parts or all of the image unreadable.

Android hardware encoder rate control

MediaCodec offers only a few rate-control knobs: the target bit rate and bit-rate mode at configure time, and dynamic adjustment of the target bit rate at run time (Android 19+). Setting the target bit rate and control mode at configure time:

mediaFormat.setInteger(MediaFormat.KEY_BIT_RATE, bitRate);
mediaFormat.setInteger(MediaFormat.KEY_BITRATE_MODE,
        MediaCodecInfo.EncoderCapabilities.BITRATE_MODE_VBR);
mMediaCodec.configure(mediaFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);

There are three bit-rate control modes, defined in MediaCodecInfo.EncoderCapabilities; the framework layer has another set of names mapping one-to-one to their values:
CQ, corresponding to OMX_Video_ControlRateDisable, means the bit rate is not controlled at all and image quality is kept as high as possible;
CBR, corresponding to OMX_Video_ControlRateConstant, means the encoder tries hard to keep the output bit rate at the set value, i.e., it "ignores spikes" as described above;
VBR, corresponding to OMX_Video_ControlRateVariable, means the encoder adjusts the output bit rate dynamically based on the complexity of the image content (in practice, the magnitude of inter-frame change): complex frames get a higher bit rate, simple frames a lower one.

Adjusting the target bit rate dynamically:

Bundle param = new Bundle();
param.putInt(MediaCodec.PARAMETER_KEY_VIDEO_BITRATE, bitrate);
mediaCodec.setParameters(param);

Choosing a rate-control strategy on Android:
If quality matters most, bandwidth is not a concern, and the decoder tolerates wild bit-rate swings, choose CQ.
VBR lets the output bit rate fluctuate within a range; for small camera shake the blocking artifacts improve, but violent shake still defeats it, and repeatedly lowering the target makes the bit rate drop sharply. If that is unacceptable, VBR is not a good choice.

An encoding example

Below is a simple example that reads data with a MediaExtractor and would rewrite it into an MP4 file with a MediaMuxer (the muxer line is left commented out). A sketch of the extractor/decoder setup that this loop assumes follows at the end of this post.

private void doExtract() throws IOException {
    MediaCodec.BufferInfo mBufferInfo = new MediaCodec.BufferInfo();
    boolean outputDone = false;
    boolean inputDone = false;
    while (!outputDone) {
        if (!inputDone) {
            int inputBufIndex = mMediaCodec.dequeueInputBuffer(10000);
            if (inputBufIndex >= 0) {
                ByteBuffer inputBuf = mMediaCodec.getInputBuffers()[inputBufIndex];
                int chunkSize = mMediaExtractor.readSampleData(inputBuf, 0);
                if (chunkSize < 0) {
                    mMediaCodec.queueInputBuffer(inputBufIndex, 0, 0, 0L,
                            MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                    inputDone = true;
                } else {
                    mMediaCodec.queueInputBuffer(inputBufIndex, 0, chunkSize,
                            mMediaExtractor.getSampleTime(), 0);
                    mMediaExtractor.advance();
                }
            }
        }
        if (!outputDone) {
            int decoderStatus = mMediaCodec.dequeueOutputBuffer(mBufferInfo, 10000);
            if (decoderStatus == MediaCodec.INFO_TRY_AGAIN_LATER) {
                Log.d(TAG, "no output from decoder available");
            } else if (decoderStatus == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
                Log.d(TAG, "decoder output buffers changed");
            } else if (decoderStatus == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                MediaFormat newFormat = mMediaCodec.getOutputFormat();
                Log.d(TAG, "decoder output format changed: " + newFormat);
            } else {
                if ((mBufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                    mMediaCodec.releaseOutputBuffer(decoderStatus, false);
                    outputDone = true;
                    break;
                }
                // Obtain a read-only output buffer holding the processed data
                ByteBuffer outputBuffer = null;
                if (Build.VERSION.SDK_INT < Build.VERSION_CODES.LOLLIPOP) {
                    outputBuffer = mMediaCodec.getOutputBuffers()[decoderStatus];
                } else {
                    outputBuffer = mMediaCodec.getOutputBuffer(decoderStatus);
                }
                if ((mBufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
                    MediaFormat format = mMediaCodec.getOutputFormat();
                    format.setByteBuffer("csd-0", outputBuffer);
                    mBufferInfo.size = 0;
                }
                // On API <= 19, adjust the ByteBuffer position by BufferInfo.offset and
                // limit the readable length, otherwise the output data will be garbled.
                if (mBufferInfo.size != 0) {
                    if (Build.VERSION.SDK_INT <= Build.VERSION_CODES.KITKAT) {
                        outputBuffer.position(mBufferInfo.offset);
                        outputBuffer.limit(mBufferInfo.offset + mBufferInfo.size);
                    }
                    // mMuxer.writeSampleData(mTrackIndex, encodedData, bufferInfo);
                }
                mMediaCodec.releaseOutputBuffer(decoderStatus, false);
            }
        }
    }
}

Source: https://blog.csdn.net/gb702250823/article/details/81627503
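The doExtract() loop above assumes that mMediaExtractor and mMediaCodec are already set up. Here is a hedged sketch of that setup, assuming a local file path and a decoder rendering to an optional Surface (the method name prepareDecoder and the field names follow the article's conventions, not its actual code):

private MediaExtractor mMediaExtractor;
private MediaCodec mMediaCodec;

private void prepareDecoder(String path, Surface surface) throws IOException {
    mMediaExtractor = new MediaExtractor();
    mMediaExtractor.setDataSource(path);
    for (int i = 0; i < mMediaExtractor.getTrackCount(); i++) {
        MediaFormat trackFormat = mMediaExtractor.getTrackFormat(i);
        String mime = trackFormat.getString(MediaFormat.KEY_MIME);
        if (mime != null && mime.startsWith("video/")) {
            mMediaExtractor.selectTrack(i);   // feed only the first video track
            mMediaCodec = MediaCodec.createDecoderByType(mime);
            mMediaCodec.configure(trackFormat, surface, null, 0); // surface may be null for ByteBuffer output
            mMediaCodec.start();
            return;
        }
    }
    throw new IOException("no video track found in " + path);
}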
• A translation of the official MediaCodec documentation
MediaCodec

The MediaCodec class can be used to access low-level media codecs, i.e., encoder/decoder components. It is part of Android's low-level multimedia support infrastructure (usually used together with MediaExtractor, MediaSync, MediaMuxer, MediaCrypto, MediaDrm, Image, Surface and AudioTrack).

Broadly speaking, a codec processes input data to generate output data. It processes data asynchronously and uses a set of input and output buffers. At a simple level, you request (or receive) an empty input buffer, fill it with data and send it to the codec for processing. The codec uses up the data and transforms it into one of its empty output buffers. Finally, you request (or receive) a filled output buffer, consume its contents and release it back to the codec.

Data Types

Codecs operate on three kinds of data: compressed data, raw audio data and raw video data. All three can be processed using ByteBuffers, but for raw video data you should use a Surface to improve codec performance. A Surface uses native video buffers without mapping or copying them to ByteBuffers and is therefore more efficient. You normally cannot access the raw video data when using a Surface, but you can use the ImageReader class to access the unsecured decoded (raw) video frames. This may still be more efficient than using ByteBuffers, as some native buffers may be mapped into direct ByteBuffers. When using ByteBuffer mode, you can access raw video frames using the Image class and getInput/OutputImage(int).

Compressed Buffers

Input buffers (for decoders) and output buffers (for encoders) contain compressed data according to the format's type. For video types, this is a single compressed video frame. For audio data, it is normally a single access unit (an encoded audio segment typically containing a few milliseconds of audio, as dictated by the format type), but this requirement is slightly relaxed in that a buffer may contain multiple encoded access units of audio. In either case, buffers do not start or end on arbitrary byte boundaries, but on frame/access-unit boundaries.

Raw Audio Buffers

Raw audio buffers contain entire frames of PCM audio data, which is one sample for each channel, in channel order. Each sample is a 16-bit signed integer in native byte order.

short[] getSamplesForChannel(MediaCodec codec, int bufferId, int channelIx) {
    ByteBuffer outputBuffer = codec.getOutputBuffer(bufferId);
    MediaFormat format = codec.getOutputFormat(bufferId);
    ShortBuffer samples = outputBuffer.order(ByteOrder.nativeOrder()).asShortBuffer();
    int numChannels = format.getInteger(MediaFormat.KEY_CHANNEL_COUNT);
    if (channelIx < 0 || channelIx >= numChannels) {
        return null;
    }
    short[] res = new short[samples.remaining() / numChannels];
    for (int i = 0; i < res.length; ++i) {
        res[i] = samples.get(i * numChannels + channelIx);
    }
    return res;
}

Raw Video Buffers

In ByteBuffer mode, video buffers are laid out according to their color format. You can get the array of supported color formats via getCodecInfo().getCapabilitiesForType(...).colorFormats. Video codecs may support three kinds of color formats:
native raw video format, marked by COLOR_FormatSurface, which can be used with an input or output Surface;
flexible YUV buffers (such as COLOR_FormatYUV420Flexible), which can be used with an input/output Surface, and also in ByteBuffer mode via getInput/OutputImage(int);
other, specific formats, which are normally only supported in ByteBuffer mode.

Some color formats are vendor-specific; the others are defined in MediaCodecInfo.CodecCapabilities. For color formats that are equivalent to a flexible format, you can still use getInput/OutputImage(int). All video codecs support flexible YUV 4:2:0 buffers since LOLLIPOP_MR1.

Accessing raw video ByteBuffers on older devices

Prior to LOLLIPOP and Image support, you need to use the KEY_STRIDE and KEY_SLICE_HEIGHT output format values to understand the layout of the raw output buffers. Note that on some devices the slice height is advertised as 0. This could mean either that the slice height equals the frame height, or that the slice height is the frame height aligned to some value (usually a power of 2). Unfortunately, in this case there is no standard and simple way to tell the actual slice height. Furthermore, the vertical stride of the U plane in planar formats is also not specified or defined, though usually it is half of the slice height.

The KEY_WIDTH and KEY_HEIGHT keys specify the size of the video frames; however, for most encodings the video (picture) only occupies a portion of the video frame. This is represented by the "crop rectangle". You need to use the following keys to get the crop rectangle of raw output images from the output format; if these keys are not present, the video occupies the entire video frame. The crop rectangle is understood in the context of the output frame, before applying any rotation.

Format key | Type | Description
"crop-left" | Integer | The left-coordinate (x) of the crop rectangle
"crop-top" | Integer | The top-coordinate (y) of the crop rectangle
"crop-right" | Integer | The right-coordinate (x) MINUS 1 of the crop rectangle
"crop-bottom" | Integer | The bottom-coordinate (y) MINUS 1 of the crop rectangle

The right and bottom coordinates can be understood as the coordinates of the rightmost valid column / bottommost valid row of the cropped output image. The size of the video frame (before rotation) can be calculated as follows:

MediaFormat format = decoder.getOutputFormat(…);
int width = format.getInteger(MediaFormat.KEY_WIDTH);
if (format.containsKey("crop-left") && format.containsKey("crop-right")) {
    width = format.getInteger("crop-right") + 1 - format.getInteger("crop-left");
}
int height = format.getInteger(MediaFormat.KEY_HEIGHT);
if (format.containsKey("crop-top") && format.containsKey("crop-bottom")) {
    height = format.getInteger("crop-bottom") + 1 - format.getInteger("crop-top");
}

Also note that the meaning of BufferInfo.offset is not consistent across devices. On some devices the offset points to the top-left pixel of the crop rectangle, while on most devices it points to the top-left pixel of the entire frame.

States

During its life, a codec conceptually exists in one of three states: Stopped, Executing or Released. The Stopped state is actually the aggregation of three states: Uninitialized, Configured and Error, whereas the Executing state conceptually progresses through three sub-states: Flushed, Running and End-of-Stream.

When you create a codec using one of the factory methods, the codec is in the Uninitialized state. First, you need to configure it via configure(…), which brings it to the Configured state, then call start() to move it to the Executing state.
In the Executing state you can process data through the buffer queue manipulation described above. The Executing state has three sub-states: Flushed, Running and End-of-Stream. Immediately after start() the codec is in the Flushed sub-state, where it holds all the buffers. As soon as the first input buffer is dequeued, the codec moves to the Running sub-state, where it spends most of its life. When you queue an input buffer with the end-of-stream marker, the codec transitions to the End-of-Stream sub-state. In this state the codec no longer accepts further input buffers, but still generates output buffers until the end of stream is reached on the output. While in the Executing state, you can return to the Flushed sub-state at any time using flush().

Calling stop() returns the codec to the Uninitialized state, whereupon it may be configured again. When you are done using the codec, you must release it by calling release().

In rare cases the codec may encounter an error and move to the Error state. This is communicated using an invalid return value from a queuing operation, or sometimes via an exception. Calling reset() makes the codec usable again: you can call it from any state to move the codec back to the Uninitialized state. Otherwise, call release() to move to the terminal Released state.

Creation

Use MediaCodecList to create a MediaCodec for a specific MediaFormat. When decoding a file or a stream, you can get the desired format from MediaExtractor.getTrackFormat. Add any specific features using MediaFormat.setFeatureEnabled, then call MediaCodecList.findDecoderForFormat to get the name of a codec that can handle that specific media format. Finally, create the codec using createByCodecName(String).

Note: on LOLLIPOP, the format passed to MediaCodecList.findDecoder/EncoderForFormat must not contain a frame rate. Use format.setString(MediaFormat.KEY_FRAME_RATE, null) to clear any existing frame-rate setting in the format.

You can also create the preferred codec for a specific MIME type using createDecoder/EncoderByType(String). This, however, cannot be used to add features, and may create a codec that cannot handle the specific desired media format.

Creating secure decoders

On KITKAT_WATCH and earlier, secure codecs might not be listed in MediaCodecList, but may still be available on the system. Secure codecs that exist can be instantiated by name, by appending ".secure" to the name of a regular codec (the names of all secure codecs must end in ".secure"). createByCodecName(String) throws an IOException if the codec is not present on the system. From LOLLIPOP onwards, you should use the FEATURE_SecurePlayback feature in the media format to create a secure decoder.

Initialization

After creating the codec, you can set a callback using setCallback if you want to process data asynchronously. Then, configure the codec with the specific media format. This is when you can specify the output Surface for video producers, codecs that generate raw video data (e.g. video decoders). This is also when you can set the decryption parameters for secure codecs (see MediaCrypto). Finally, since some codecs can operate in multiple modes, you must specify whether you want it to work as a decoder or an encoder.

Since LOLLIPOP, you can query the resulting input and output format in the Configured state. You can use this to verify the resulting configuration, e.g. color formats, before starting the codec.

If you want to process raw input video buffers natively with a video consumer, a codec that processes raw video input such as a video encoder, create a destination input Surface after configuration using createInputSurface(). Alternatively, set up the codec to use a previously created persistent input Surface by calling setInputSurface(Surface).

Codec-specific Data

Some formats, notably AAC audio and the MPEG4, H.264 and H.265 video formats, require the actual data to be prefixed by buffers containing setup data, or codec-specific data. When processing such compressed formats, this data must be submitted to the codec after start() and before any frame data. Such data must be marked with the flag BUFFER_FLAG_CODEC_CONFIG in a call to queueInputBuffer.

Codec-specific data can also be included in the format passed to configure, in ByteBuffer entries with keys "csd-0", "csd-1", etc. These keys are always included in a MediaFormat obtained from MediaExtractor. Codec-specific data in the format is automatically submitted to the codec upon start(); you must not submit this data explicitly. If the format does not contain codec-specific data, you can choose to submit it yourself using the specified number of buffers in the correct order, according to the format requirements. In the case of H.264 AVC, you can also concatenate all codec-specific data and submit it as a single codec-config buffer.

Android uses the following codec-specific data buffers. These are also required to be set in the track format for proper MediaMuxer track configuration. Each parameter set and each codec-specific data section marked with (*) must start with the start code "\x00\x00\x00\x01".

Format | CSD buffer #0 | CSD buffer #1 | CSD buffer #2
AAC | Decoder-specific information from ESDS* | Not used | Not used
VORBIS | Identification header | Setup header | Not used
OPUS | Identification header | Pre-skip in nanosecs (unsigned 64-bit native-order integer); this overrides the pre-skip value in the identification header | Seek pre-roll in nanosecs (unsigned 64-bit native-order integer)
MPEG-4 | Decoder-specific information from ESDS* | Not used | Not used
H.264 AVC | SPS (Sequence Parameter Sets*) | PPS (Picture Parameter Sets*) | Not used
H.265 HEVC | VPS (Video Parameter Sets*) + SPS (Sequence Parameter Sets*) + PPS (Picture Parameter Sets*) | Not used | Not used
VP9 | VP9 CodecPrivate data (optional) | Not used | Not used

Note: care must be taken if the codec is flushed immediately or shortly after start, before any output buffer or output format change has been returned, as the codec-specific data may be lost during the flush. You must resubmit the data using buffers marked with BUFFER_FLAG_CODEC_CONFIG after such a flush to ensure proper codec operation.

Encoders (or codecs that generate compressed data) will create and return codec-specific data in output buffers marked with the codec-config flag, before any buffers containing valid data. Buffers containing codec-specific data have no meaningful timestamps.

Data Processing

Each codec maintains a set of input and output buffers that are referred to by a buffer ID in API calls. After a successful call to start(), the client "owns" neither input nor output buffers. In synchronous mode, call dequeueInput/OutputBuffer(…) to obtain (get ownership of) an input or output buffer from the codec. In asynchronous mode, you automatically receive available buffers via the MediaCodec.Callback.onInput/OutputBufferAvailable(…) callbacks.

Upon obtaining an input buffer, fill it with data and submit it to the codec using queueInputBuffer, or queueSecureInputBuffer if using decryption. Do not submit multiple input buffers with the same timestamp (unless it is codec-specific data). The codec in turn returns read-only output buffers via the onOutputBufferAvailable callback in asynchronous mode, or in response to a dequeueOutputBuffer call in synchronous mode. After an output buffer has been processed, call one of the releaseOutputBuffer methods to return the buffer to the codec.

While you are not required to resubmit or release buffers to the codec immediately, holding onto input and/or output buffers may stall the codec, and this behavior is device dependent. Specifically, the codec may hold off generating output buffers until all outstanding buffers have been released or resubmitted. Therefore, try to hold onto available buffers as briefly as possible.

Depending on the API version, you can process data in three ways:

Processing mode | API version <= 20 (Jelly Bean / KitKat) | API version >= 21 (Lollipop and later)
Synchronous API using buffer arrays | Supported | Deprecated
Synchronous API using buffers | Not available | Supported
Asynchronous API using buffers | Not available | Supported

Asynchronous processing using buffers

Since LOLLIPOP, the preferred method is to process data asynchronously by setting a callback before configuring the codec. Asynchronous mode changes the state transitions slightly, because you must call start() after flush() to transition the codec to the Running sub-state and start receiving input buffers. Similarly, upon an initial call to start, the codec moves directly to the Running sub-state and starts passing available input buffers via the callback.

A typical example of MediaCodec in asynchronous mode:

MediaCodec codec = MediaCodec.createByCodecName(name);
MediaFormat mOutputFormat; // member variable
codec.setCallback(new MediaCodec.Callback() {
    @Override
    void onInputBufferAvailable(MediaCodec mc, int inputBufferId) {
        ByteBuffer inputBuffer = codec.getInputBuffer(inputBufferId);
        // fill inputBuffer with valid data
        …
        codec.queueInputBuffer(inputBufferId, …);
    }

    @Override
    void onOutputBufferAvailable(MediaCodec mc, int outputBufferId, …) {
        ByteBuffer outputBuffer = codec.getOutputBuffer(outputBufferId);
        MediaFormat bufferFormat = codec.getOutputFormat(outputBufferId); // option A
        // bufferFormat is equivalent to mOutputFormat
        // outputBuffer is ready to be processed or rendered.
        …
        codec.releaseOutputBuffer(outputBufferId, …);
    }

    @Override
    void onOutputFormatChanged(MediaCodec mc, MediaFormat format) {
        // Subsequent data will conform to new format.
        // Can ignore if using getOutputFormat(outputBufferId)
        mOutputFormat = format; // option B
    }

    @Override
    void onError(…) {
        …
    }
});
codec.configure(format, …);
mOutputFormat = codec.getOutputFormat(); // option B
codec.start();
// wait for processing to complete
codec.stop();
codec.release();

Synchronous processing using buffers

Since LOLLIPOP, you should retrieve input and output buffers using getInput/OutputBuffer(int) and/or getInput/OutputImage(int) even when using the codec in synchronous mode. This allows the framework to apply certain optimizations, e.g. when processing dynamic content. This optimization is disabled if you call getInput/OutputBuffers().

Note: do not mix the buffer and buffer-array methods at the same time. Specifically, only call getInput/OutputBuffers directly after start(), or after having dequeued an output buffer ID with the value INFO_OUTPUT_FORMAT_CHANGED.

A typical example of MediaCodec in synchronous mode:

MediaCodec codec = MediaCodec.createByCodecName(name);
codec.configure(format, …);
MediaFormat outputFormat = codec.getOutputFormat(); // option B
codec.start();
for (;;) {
    int inputBufferId = codec.dequeueInputBuffer(timeoutUs);
    if (inputBufferId >= 0) {
        ByteBuffer inputBuffer = codec.getInputBuffer(…);
        // fill inputBuffer with valid data
        …
        codec.queueInputBuffer(inputBufferId, …);
    }
    int outputBufferId = codec.dequeueOutputBuffer(…);
    if (outputBufferId >= 0) {
        ByteBuffer outputBuffer = codec.getOutputBuffer(outputBufferId);
        MediaFormat bufferFormat = codec.getOutputFormat(outputBufferId); // option A
        // bufferFormat is identical to outputFormat
        // outputBuffer is ready to be processed or rendered.
        …
        codec.releaseOutputBuffer(outputBufferId, …);
    } else if (outputBufferId == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
        // Subsequent data will conform to new format.
        // Can ignore if using getOutputFormat(outputBufferId)
        outputFormat = codec.getOutputFormat(); // option B
    }
}
codec.stop();
codec.release();

Synchronous processing using buffer arrays (deprecated)

In versions KITKAT_WATCH and earlier, the set of input and output buffers is represented by ByteBuffer[] arrays. After a successful call to start(), retrieve the buffer arrays using getInput/OutputBuffers(). Use the buffer IDs as indices into these arrays (when non-negative), as demonstrated in the sample below. Note that there is no inherent correlation between the size of the arrays and the number of input and output buffers in use by the system, although the array size provides an upper bound.

MediaCodec codec = MediaCodec.createByCodecName(name);
codec.configure(format, …);
codec.start();
ByteBuffer[] inputBuffers = codec.getInputBuffers();
ByteBuffer[] outputBuffers = codec.getOutputBuffers();
for (;;) {
    int inputBufferId = codec.dequeueInputBuffer(…);
    if (inputBufferId >= 0) {
        // fill inputBuffers[inputBufferId] with valid data
        …
        codec.queueInputBuffer(inputBufferId, …);
    }
    int outputBufferId = codec.dequeueOutputBuffer(…);
    if (outputBufferId >= 0) {
        // outputBuffers[outputBufferId] is ready to be processed or rendered.
        …
        codec.releaseOutputBuffer(outputBufferId, …);
    } else if (outputBufferId == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
        outputBuffers = codec.getOutputBuffers();
    } else if (outputBufferId == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
        // Subsequent data will conform to new format.
        MediaFormat format = codec.getOutputFormat();
    }
}
codec.stop();
codec.release();

End-of-stream Handling

When you reach the end of the input data, you must signal it to the codec by specifying the BUFFER_FLAG_END_OF_STREAM flag in the call to queueInputBuffer. You can do this on the last valid input buffer, or by submitting an additional empty input buffer with the end-of-stream flag set. If using an empty buffer, its timestamp will be ignored.

The codec will continue to return output buffers until it eventually signals the end of the output stream by specifying the same end-of-stream flag in the MediaCodec.BufferInfo set in dequeueOutputBuffer, or returned via onOutputBufferAvailable. This can be set on the last valid output buffer, or on an empty buffer after the last valid output buffer; the timestamp of such an empty buffer should be ignored. Do not submit additional input buffers after signaling the end of the input stream, unless the codec has been flushed, or stopped and restarted. (A minimal sketch of this drain sequence appears after the key-frame table below.)

Using an Output Surface

When using an output Surface, data processing is nearly identical to ByteBuffer mode; however, the output buffers are not accessible and are represented as null values. E.g. getOutputBuffer/Image(int) will return null, and getOutputBuffers() will return an array containing only nulls.

When using an output Surface, you can select whether or not to render each output buffer on the surface. You have three choices:
do not render the buffer: call releaseOutputBuffer(bufferId, false);
render the buffer with the default timestamp: call releaseOutputBuffer(bufferId, true);
render the buffer with a specific timestamp: call releaseOutputBuffer(bufferId, timestamp).

Since M, the default timestamp is the presentation timestamp of the buffer (converted to nanoseconds); it was not defined prior to that. Also, since M, you can change the output Surface dynamically using setOutputSurface.

Transformations when rendering onto a Surface

If the codec is configured in Surface mode, any crop rectangle, rotation and video scaling mode will be automatically applied, with one exception: before Android M, software decoders may not have applied the rotation when rendering onto a Surface. Unfortunately, there is no standard and simple way to identify software decoders, or whether they apply the rotation, other than by trying it out.

There are also some caveats. Note that the pixel aspect ratio is not considered when displaying the output onto the Surface. This means that if you use VIDEO_SCALING_MODE_SCALE_TO_FIT mode, you must position the output Surface so that it has the correct final display aspect ratio. Conversely, you may only use VIDEO_SCALING_MODE_SCALE_TO_FIT_WITH_CROPPING mode for content with square pixels (a pixel aspect ratio of 1:1). Note also that as of Android N, VIDEO_SCALING_MODE_SCALE_TO_FIT_WITH_CROPPING mode may not work correctly for videos rotated by 90 or 270 degrees. When setting the video scaling mode, note that it must be reset each time the output buffers change. Since the INFO_OUTPUT_BUFFERS_CHANGED event is deprecated, you can do this after each output format change.

Using a Surface as input

When using an input Surface, there are no accessible input buffers, as buffers are automatically passed from the input Surface to the codec. Calling dequeueInputBuffer will throw an IllegalStateException, and getInputBuffers() returns a bogus ByteBuffer[] array that must not be written to. Call signalEndOfInputStream to signal end-of-stream; the input Surface stops submitting data to the codec immediately after this call.

Seeking and adaptive-playback support

Video decoders (and, in general, codecs that consume compressed video data) behave differently regarding seeks and format changes depending on whether they support, and are configured for, adaptive playback. You can check whether a decoder supports adaptive playback via CodecCapabilities.isFeatureSupported(String). Adaptive-playback support for a video decoder is only activated if you configure the codec to decode onto a Surface.

Stream boundaries and key frames

It is important that the input data after start() or flush() starts at a suitable stream boundary: the first frame must be a key frame. A key frame can be decoded completely on its own (for most codecs this means an I-frame), and no frame after a key frame refers to a frame before it. The following table summarizes suitable key frames for various video formats:

Format | Suitable key frame
VP9/VP8 | a suitable intraframe where no subsequent frames refer to frames prior to this frame (there is no specific name for such a key frame)
H.265 HEVC | IDR or CRA
H.264 AVC | IDR
MPEG-4, H.263, MPEG-2 | a suitable I-frame where no subsequent frames refer to frames prior to this frame (there is no specific name for such a key frame)
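As a concrete illustration of the End-of-stream Handling section above, here is a minimal drain sketch (codec, inputBufferId and the timeout value are assumptions): queue the EOS flag on an empty input buffer, then keep dequeuing outputs until the flag comes back.

// Signal end of input on an empty buffer; its timestamp is ignored.
codec.queueInputBuffer(inputBufferId, 0, 0, 0L, MediaCodec.BUFFER_FLAG_END_OF_STREAM);

// Drain remaining output until the codec echoes the EOS flag back.
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
boolean outputDone = false;
while (!outputDone) {
    int outId = codec.dequeueOutputBuffer(info, 10000);
    if (outId >= 0) {
        // ... consume the buffer contents here ...
        codec.releaseOutputBuffer(outId, false);
        outputDone = (info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0;
    }
}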
For decoders that do not support adaptive playback (including when not decoding onto a Surface)

In order to start decoding data that is not adjacent to previously submitted data (i.e. after a seek), you must flush the decoder. Since all output buffers are immediately revoked at the point of flush, you may want to first signal end-of-stream, wait for it to arrive on the output, and then call flush. It is important that the input data after a flush starts at a suitable stream boundary/key frame.

Note: the format of the data must not change after a flush; flush() does not support format discontinuities; for that, a full stop()/configure()/start() cycle is necessary.

Also note: if you flush the codec too soon after start(), generally before the first output buffer or output format change is received, you will need to resubmit the codec-specific data to the codec. See the codec-specific data section for more information.

For decoders that support and are configured for adaptive playback

In order to start decoding data that is not adjacent to previously submitted data (i.e. after a seek), it is not necessary to flush the decoder; however, the input data after the discontinuity must start at a suitable stream boundary/key frame.

For some video formats, namely H.264, H.265, VP8 and VP9, it is also possible to change the picture size or configuration mid-stream. To do this, you must package the entire new codec-specific configuration data together with the key frame into a single buffer (including any start codes), and submit it as a regular input buffer. You will receive an INFO_OUTPUT_FORMAT_CHANGED return value from dequeueOutputBuffer, or an onOutputFormatChanged callback, just after the picture-size change takes place and before any frames with the new size have been returned.

Note: just as with codec-specific data, be careful when calling flush() shortly after you have changed the picture size. If you have not received confirmation of the picture-size change, you will need to repeat the request to change the picture size.

Error handling

The factory methods createByCodecName and createDecoder/EncoderByType throw an IOException on failure, which you must catch or declare to pass up. MediaCodec methods throw an IllegalStateException when the method is called from a codec state that does not allow it; this is typically due to incorrect application API usage. Methods involving secure buffers may throw a MediaCodec.CryptoException, with further error information obtainable from getErrorCode().

Internal codec errors result in a MediaCodec.CodecException, which may be due to media content corruption, hardware failure, resource exhaustion, and so forth, even when the application has correctly used the API. The recommended action upon receiving a CodecException can be determined by calling isRecoverable() and isTransient():
recoverable errors: if isRecoverable() returns true, call stop(), configure(…) and start() to recover;
transient errors: if isTransient() returns true, resources are temporarily unavailable and the method may be retried at a later time;
fatal errors: if both isRecoverable() and isTransient() return false, the CodecException is fatal and the codec must be reset or released.
isRecoverable() and isTransient() never both return true.

Copyright: https://blog.csdn.net/YSSJZ960427031
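A small sketch of the recovery policy just described (codec, format, surface and the queueInputBuffer arguments are assumptions): catch MediaCodec.CodecException and branch on isRecoverable()/isTransient().

try {
    codec.queueInputBuffer(index, 0, size, presentationTimeUs, 0);
} catch (MediaCodec.CodecException e) {
    if (e.isRecoverable()) {
        // Recoverable: stop, reconfigure and restart the same codec instance.
        codec.stop();
        codec.configure(format, surface, null, 0);
        codec.start();
    } else if (e.isTransient()) {
        // Transient: resources are temporarily busy; retry the same call later.
    } else {
        // Fatal: the codec must be reset or released.
        codec.reset();
    }
}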