Android - How to mux an audio file and a video file?

8
I have a 3gp file recorded from the microphone and an mp4 video file. I want to mux the audio file and the video file into a single mp4 file and save it. I searched a lot but couldn't find anything useful about using Android's MediaMuxer API. Update: below is my method for muxing the two files, but it throws an exception. The reason seems to be that the destination mp4 file ends up without any tracks! Can someone help me add the audio and video tracks to the muxer?
Exception message:
java.lang.IllegalStateException: Failed to stop the muxer
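
For reference, the flow MediaMuxer expects is: add every track with addTrack() before start(), call writeSampleData() for each added track, then stop(); in practice stop() can throw "Failed to stop the muxer" when an added track never received any samples. Below is a minimal single-track sketch of that lifecycle; the method name and paths are placeholders, not code from the original post:

private void copySingleTrack(String srcPath, String dstPath) throws IOException {
    // Open the source and select its first track.
    MediaExtractor extractor = new MediaExtractor();
    extractor.setDataSource(srcPath);
    extractor.selectTrack(0);
    MediaFormat format = extractor.getTrackFormat(0);

    // Tracks must be added before start().
    MediaMuxer muxer = new MediaMuxer(dstPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
    int dstTrack = muxer.addTrack(format);
    muxer.start();

    ByteBuffer buffer = ByteBuffer.allocate(256 * 1024);
    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    while (true) {
        info.size = extractor.readSampleData(buffer, 0);
        if (info.size < 0) {
            break; // end of stream
        }
        info.offset = 0;
        info.presentationTimeUs = extractor.getSampleTime();
        info.flags = (extractor.getSampleFlags() & MediaExtractor.SAMPLE_FLAG_SYNC) != 0
                ? MediaCodec.BUFFER_FLAG_KEY_FRAME : 0;
        muxer.writeSampleData(dstTrack, buffer, info);
        extractor.advance();
    }

    // stop() fails if an added track got no samples.
    muxer.stop();
    muxer.release();
    extractor.release();
}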

My code:

private void cloneMediaUsingMuxer( String dstMediaPath) throws IOException {
    // Set up MediaExtractor to read from the source.
    MediaExtractor soundExtractor = new MediaExtractor();
    soundExtractor.setDataSource(audioFilePath);
    MediaExtractor videoExtractor = new MediaExtractor();
    AssetFileDescriptor afd2 = getAssets().openFd("Produce.MP4");
    videoExtractor.setDataSource(afd2.getFileDescriptor() , afd2.getStartOffset(),afd2.getLength());


    //PATH
    //extractor.setDataSource();
    int trackCount = soundExtractor.getTrackCount();
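    // note: the next line also queries soundExtractor; videoExtractor.getTrackCount() is presumably what was intended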
    int trackCount2 = soundExtractor.getTrackCount();

    //assertEquals("wrong number of tracks", expectedTrackCount, trackCount);
    // Set up MediaMuxer for the destination.
    MediaMuxer muxer;
    muxer = new MediaMuxer(dstMediaPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
    // Set up the tracks.
    HashMap<Integer, Integer> indexMap = new HashMap<Integer, Integer>(trackCount);
    for (int i = 0; i < trackCount; i++) {
        soundExtractor.selectTrack(i);
        MediaFormat SoundFormat = soundExtractor.getTrackFormat(i);
        int dstIndex = muxer.addTrack(SoundFormat);
        indexMap.put(i, dstIndex);
    }

    HashMap<Integer, Integer> indexMap2 = new HashMap<Integer, Integer>(trackCount2);
    for (int i = 0; i < trackCount2; i++) {
        videoExtractor.selectTrack(i);
        MediaFormat videoFormat = videoExtractor.getTrackFormat(i);
        int dstIndex2 = muxer.addTrack(videoFormat);
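        // note: this put() targets indexMap rather than indexMap2, so for each key i it overwrites the audio track's destination index stored above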
        indexMap.put(i, dstIndex2);
    }


    // Copy the samples from MediaExtractor to MediaMuxer.
    boolean sawEOS = false;
    int bufferSize = MAX_SAMPLE_SIZE;
    int frameCount = 0;
    int offset = 100;
    ByteBuffer dstBuf = ByteBuffer.allocate(bufferSize);
    MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
    MediaCodec.BufferInfo bufferInfo2 = new MediaCodec.BufferInfo();

    muxer.start();
    while (!sawEOS) {
        bufferInfo.offset = offset;
        bufferInfo.size = soundExtractor.readSampleData(dstBuf, offset);
        bufferInfo2.offset = offset;
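        // note: the next read uses the same dstBuf that the audio sample was just read into, overwriting it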
        bufferInfo2.size = videoExtractor.readSampleData(dstBuf, offset);

        if (bufferInfo.size < 0) {
            sawEOS = true;
            bufferInfo.size = 0;
            bufferInfo2.size = 0;
        }else if(bufferInfo2.size < 0){
            sawEOS = true;
            bufferInfo.size = 0;
            bufferInfo2.size = 0;
        }
        else {
            bufferInfo.presentationTimeUs = soundExtractor.getSampleTime();
            bufferInfo2.presentationTimeUs = videoExtractor.getSampleTime();
            //bufferInfo.flags = extractor.getSampleFlags();
            int trackIndex = soundExtractor.getSampleTrackIndex();
            int trackIndex2 = videoExtractor.getSampleTrackIndex();
            muxer.writeSampleData(indexMap.get(trackIndex), dstBuf,
                    bufferInfo);

            soundExtractor.advance();
            videoExtractor.advance();
            frameCount++;

        }
    }

    Toast.makeText(getApplicationContext(),"f:"+frameCount,Toast.LENGTH_SHORT).show();

    muxer.stop();
    muxer.release();

}

Update 2: the problem is solved! Please see my answer below.
Thanks for your help.

Are you willing to use the NDK, or do you want a pure Java solution? - StephenG
Anything that solves the problem is perfect. I think pure Java with MediaMuxer is better. - mohamad ali gharat
Can you give more details about the exception? If you enable verbose logging, logcat should show the exception code coming from MediaMuxer. - StephenG
4 Answers

25

I had a problem with the tracks of my audio and video files disappearing. The code below no longer has that problem, and you can use it to merge an audio file and a video file together.

Code:

private void muxing() {

String outputFile = "";

try {

    File file = new File(Environment.getExternalStorageDirectory() + File.separator + "final2.mp4");
    file.createNewFile();
    outputFile = file.getAbsolutePath();

    MediaExtractor videoExtractor = new MediaExtractor();
    AssetFileDescriptor afdd = getAssets().openFd("Produce.MP4");
    videoExtractor.setDataSource(afdd.getFileDescriptor() ,afdd.getStartOffset(),afdd.getLength());

    MediaExtractor audioExtractor = new MediaExtractor();
    audioExtractor.setDataSource(audioFilePath);

    Log.d(TAG, "Video Extractor Track Count " + videoExtractor.getTrackCount() );
    Log.d(TAG, "Audio Extractor Track Count " + audioExtractor.getTrackCount() );

    MediaMuxer muxer = new MediaMuxer(outputFile, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);

    videoExtractor.selectTrack(0);
    MediaFormat videoFormat = videoExtractor.getTrackFormat(0);
    int videoTrack = muxer.addTrack(videoFormat);

    audioExtractor.selectTrack(0);
    MediaFormat audioFormat = audioExtractor.getTrackFormat(0);
    int audioTrack = muxer.addTrack(audioFormat);

    Log.d(TAG, "Video Format " + videoFormat.toString() );
    Log.d(TAG, "Audio Format " + audioFormat.toString() );

    boolean sawEOS = false;
    int frameCount = 0;
    int offset = 100;
    int sampleSize = 256 * 1024;
    ByteBuffer videoBuf = ByteBuffer.allocate(sampleSize);
    ByteBuffer audioBuf = ByteBuffer.allocate(sampleSize);
    MediaCodec.BufferInfo videoBufferInfo = new MediaCodec.BufferInfo();
    MediaCodec.BufferInfo audioBufferInfo = new MediaCodec.BufferInfo();


    videoExtractor.seekTo(0, MediaExtractor.SEEK_TO_CLOSEST_SYNC);
    audioExtractor.seekTo(0, MediaExtractor.SEEK_TO_CLOSEST_SYNC);

    muxer.start();

    while (!sawEOS)
    {
        videoBufferInfo.offset = offset;
        videoBufferInfo.size = videoExtractor.readSampleData(videoBuf, offset);


        if (videoBufferInfo.size < 0 || audioBufferInfo.size < 0)
        {
            Log.d(TAG, "saw input EOS.");
            sawEOS = true;
            videoBufferInfo.size = 0;

        }
        else
        {
            videoBufferInfo.presentationTimeUs = videoExtractor.getSampleTime();
            videoBufferInfo.flags = videoExtractor.getSampleFlags();
            muxer.writeSampleData(videoTrack, videoBuf, videoBufferInfo);
            videoExtractor.advance();


            frameCount++;
            Log.d(TAG, "Frame (" + frameCount + ") Video PresentationTimeUs:" + videoBufferInfo.presentationTimeUs +" Flags:" + videoBufferInfo.flags +" Size(KB) " + videoBufferInfo.size / 1024);
            Log.d(TAG, "Frame (" + frameCount + ") Audio PresentationTimeUs:" + audioBufferInfo.presentationTimeUs +" Flags:" + audioBufferInfo.flags +" Size(KB) " + audioBufferInfo.size / 1024);

        }
    }

    Toast.makeText(getApplicationContext() , "frame:" + frameCount , Toast.LENGTH_SHORT).show();



    boolean sawEOS2 = false;
    int frameCount2 =0;
    while (!sawEOS2)
    {
        frameCount2++;

        audioBufferInfo.offset = offset;
        audioBufferInfo.size = audioExtractor.readSampleData(audioBuf, offset);

        if (videoBufferInfo.size < 0 || audioBufferInfo.size < 0)
        {
            Log.d(TAG, "saw input EOS.");
            sawEOS2 = true;
            audioBufferInfo.size = 0;
        }
        else
        {
            audioBufferInfo.presentationTimeUs = audioExtractor.getSampleTime();
            audioBufferInfo.flags = audioExtractor.getSampleFlags();
            muxer.writeSampleData(audioTrack, audioBuf, audioBufferInfo);
            audioExtractor.advance();


            Log.d(TAG, "Frame (" + frameCount + ") Video PresentationTimeUs:" + videoBufferInfo.presentationTimeUs +" Flags:" + videoBufferInfo.flags +" Size(KB) " + videoBufferInfo.size / 1024);
            Log.d(TAG, "Frame (" + frameCount + ") Audio PresentationTimeUs:" + audioBufferInfo.presentationTimeUs +" Flags:" + audioBufferInfo.flags +" Size(KB) " + audioBufferInfo.size / 1024);

        }
    }

    Toast.makeText(getApplicationContext() , "frame:" + frameCount2 , Toast.LENGTH_SHORT).show();

    muxer.stop();
    muxer.release();


} catch (IOException e) {
    Log.d(TAG, "Mixer Error 1 " + e.getMessage());
} catch (Exception e) {
    Log.d(TAG, "Mixer Error 2 " + e.getMessage());
}

}

Thanks to this sample code: MediaMuxer sample code - it works perfectly.
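
One practical note that is not part of the original answer: the output file above is created on external storage, which requires the WRITE_EXTERNAL_STORAGE permission, and on Android 6.0 and later that permission also has to be granted at runtime. A minimal sketch of the runtime check, assuming it runs inside an Activity and that REQUEST_STORAGE is an arbitrary request code defined by the app:

// Hedged sketch: ask for WRITE_EXTERNAL_STORAGE before calling muxing().
// REQUEST_STORAGE is a hypothetical request code, not from the original answer.
if (ContextCompat.checkSelfPermission(this, Manifest.permission.WRITE_EXTERNAL_STORAGE)
        != PackageManager.PERMISSION_GRANTED) {
    ActivityCompat.requestPermissions(this,
            new String[]{ Manifest.permission.WRITE_EXTERNAL_STORAGE }, REQUEST_STORAGE);
} else {
    muxing();
}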


7
I copied your code, but it doesn't run properly; I get the error "Failed to add the track to the muxer". How can I fix this? Thanks. - AnswerZhao
1
Great example. One thing to add: I ran into problems when trying to mux an audio track in an unsupported format. Basically, if you want to produce an mp4 video from already-encoded video and audio tracks, they have to use one of a specific set of MIME types (formats). For audio these are MIMETYPE_AUDIO_AMR_NB, MIMETYPE_AUDIO_AMR_WB, or MIMETYPE_AUDIO_AAC; otherwise you get an "unknown mime type 'audio/whatever'" error (see the sketch after these comments). - SuppressWarnings
@AnandDiamond, the only thing you need is the path of the video or audio file. If you have the path, you can do it. - mohamad ali gharat
@mohamadaligharat I can do that and the MP4 file gets created, but I can't play the file; I get an error. - Anand Diamond
@mohamadaligharat When I try to run the code above I get an "unsupported mime type 'audio/raw'" error. I recorded the audio separately, but I don't know how to convert it to one of the specific encoding formats so that the code above works. Please let me know if you can help. - ayush bagaria
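Following up on the MIME type comment above, here is a hedged sketch (not from the original thread) that checks an extracted audio track's MIME type before handing it to MediaMuxer, since MPEG-4 output only accepts certain audio formats such as AAC, AMR-NB, and AMR-WB:

// Sketch: assumes audioExtractor already has its data source set and TAG is defined.
MediaFormat audioFormat = audioExtractor.getTrackFormat(0);
String mime = audioFormat.getString(MediaFormat.KEY_MIME);
if (!MediaFormat.MIMETYPE_AUDIO_AAC.equals(mime)
        && !MediaFormat.MIMETYPE_AUDIO_AMR_NB.equals(mime)
        && !MediaFormat.MIMETYPE_AUDIO_AMR_WB.equals(mime)) {
    // e.g. "audio/raw" from an uncompressed recording would land here and would
    // need to be re-encoded (for example to AAC with MediaCodec) before muxing.
    Log.w(TAG, "Audio mime type " + mime + " is unlikely to be accepted by MediaMuxer for MP4 output");
}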

2
Thanks to Mohamad Ali Gharat for this answer, it helped me a lot. But I had to make a couple of changes to the code to get it working. First: I changed the videoExtractor.setDataSource() call, which loaded the video from the app assets, to

videoExtractor.setDataSource(Environment.getExternalStorageDirectory().getPath() + "/Produce.MP4");

so that the video is loaded from the SD card. Second: I got an error on the line

videoBufferInfo.flags = videoExtractor.getSampleFlags();

so I changed it to

videoBufferInfo.flags = MediaCodec.BUFFER_FLAG_SYNC_FRAME;

to make it work, as explained in this link: Android MediaMuxer failed to stop.
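
As an alternative to hard-coding BUFFER_FLAG_SYNC_FRAME (this is a sketch, not from the original answer), the extractor's sample flags can be translated into the corresponding MediaCodec buffer flags explicitly, since the two sets of constants live in different classes:

// Sketch: map MediaExtractor sample flags onto MediaCodec.BufferInfo flags.
int sampleFlags = videoExtractor.getSampleFlags();
int bufferFlags = 0;
if ((sampleFlags & MediaExtractor.SAMPLE_FLAG_SYNC) != 0) {
    bufferFlags |= MediaCodec.BUFFER_FLAG_KEY_FRAME; // key/sync frame
}
videoBufferInfo.flags = bufferFlags;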


1
private const val MAX_SAMPLE_SIZE = 256 * 1024

fun muxAudioVideo(destination: File, audioSource: File, videoSource: File): Boolean {

    var result : Boolean
    var muxer : MediaMuxer? = null

    try {

        // Set up MediaMuxer for the destination.

        muxer = MediaMuxer(destination.path, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4)

        // Copy the samples from MediaExtractor to MediaMuxer.

        var videoFormat : MediaFormat? = null
        var audioFormat : MediaFormat? = null
    
        var muxerStarted : Boolean = false

        var videoTrackIndex = -1
        var audioTrackIndex = -1

        // extractorVideo

        var extractorVideo = MediaExtractor()

        extractorVideo.setDataSource(videoSource.path)

        val tracks = extractorVideo.trackCount

        for (i in 0 until tracks) {

            val mf = extractorVideo.getTrackFormat(i)

            val mime = mf.getString(MediaFormat.KEY_MIME)
    
            if (mime!!.startsWith("video/")) {

                extractorVideo.selectTrack(i)
                videoFormat = extractorVideo.getTrackFormat(i)

                break
            }
        }


        // extractorAudio

        var extractorAudio = MediaExtractor()

        extractorAudio.setDataSource(audioSource.path)

        // iterate over the audio extractor's own tracks (not the video extractor's track count)
        val audioTracks = extractorAudio.trackCount

        for (i in 0 until audioTracks) {

            val mf = extractorAudio.getTrackFormat(i)

            val mime = mf.getString(MediaFormat.KEY_MIME)

            if (mime!!.startsWith("audio/")) {

                extractorAudio.selectTrack(i)
                audioFormat = extractorAudio.getTrackFormat(i)

                break

            }

        }

        // videoTrackIndex

        if (videoTrackIndex == -1) {

            videoTrackIndex = muxer.addTrack(videoFormat!!)

        }

        // audioTrackIndex

        if (audioTrackIndex == -1) {

            audioTrackIndex = muxer.addTrack(audioFormat!!)

        }

        var sawEOS = false
        var sawAudioEOS = false
        val bufferSize = MAX_SAMPLE_SIZE
        val dstBuf = ByteBuffer.allocate(bufferSize)
        val offset = 0
        val bufferInfo = MediaCodec.BufferInfo()

        // start muxer
    
        if (!muxerStarted) {

            muxer.start()

            muxerStarted = true

        }

        // write video
    
        while (!sawEOS) {

            bufferInfo.offset = offset
            bufferInfo.size = extractorVideo.readSampleData(dstBuf, offset)

            if (bufferInfo.size < 0) {
    
                sawEOS = true
                bufferInfo.size = 0

            } else {

                bufferInfo.presentationTimeUs = extractorVideo.sampleTime
                bufferInfo.flags = MediaCodec.BUFFER_FLAG_SYNC_FRAME
                muxer.writeSampleData(videoTrackIndex, dstBuf, bufferInfo)
                extractorVideo.advance()

            }

        }

        // write audio
    
        val audioBuf = ByteBuffer.allocate(bufferSize)

        while (!sawAudioEOS) {

            bufferInfo.offset = offset
            bufferInfo.size = extractorAudio.readSampleData(audioBuf, offset)

            if (bufferInfo.size < 0) {
    
                sawAudioEOS = true
                bufferInfo.size = 0

            } else {

                bufferInfo.presentationTimeUs = extractorAudio.sampleTime
                bufferInfo.flags = MediaCodec.BUFFER_FLAG_SYNC_FRAME
                muxer.writeSampleData(audioTrackIndex, audioBuf, bufferInfo)
                extractorAudio.advance()

            }

        }

        extractorVideo.release()
        extractorAudio.release()

        result = true

    } catch (e: IOException) {

        result = false

    } finally {

        if (muxer != null) {
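            // note: if an exception was thrown before muxer.start(), stop() itself throws
            // IllegalStateException here; tracking whether the muxer was started avoids that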
            muxer.stop()
            muxer.release()
        }

    }

    return result

}

1

Thanks. I found source code that is close to ffmpeg and easier, but I have no experience with the MediaMuxer class and its behavior. Could you help me with an easier-to-use answer? - mohamad ali gharat
