How to mix / overlay two MP3 audio files into one MP3 file (not concatenate)

北荒 2020-12-14 09:23

I want to merge two MP3 files into one MP3 file. For example, if the first file is 1 min long and the second is 30 sec, the output should be 1 min long, and during that minute both files should play at the same time.

7 Answers
  • 2020-12-14 09:40

    1. Post on Audio mixing in Android

    2. Another post on mixing audio in Android

    3. You could leverage Java Sound to mix two audio files

    Example:

    
    // First convert each audio file to an AudioInputStream
    AudioInputStream audioInputStream = AudioSystem.getAudioInputStream(soundFile);
    AudioInputStream audioInputStream2 = AudioSystem.getAudioInputStream(soundFile2);
    
    // Collect the streams to mix in a list
    List<AudioInputStream> streams = new ArrayList<>();
    streams.add(audioInputStream);
    streams.add(audioInputStream2);
    
    // Pass the target AudioFormat and the list to the MixingAudioInputStream constructor
    MixingAudioInputStream mixer = new MixingAudioInputStream(audioFormat, streams);
    
    // Finally, read data from the mixed AudioInputStream and feed it to a SourceDataLine
    int nBytesRead = mixer.read(abData, 0, abData.length);
    int nBytesWritten = line.write(abData, 0, nBytesRead);
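
    If you want a file rather than live playback, and assuming MixingAudioInputStream (from the jsresources.org examples) extends AudioInputStream and both inputs share audioFormat, you could write the mixed stream to a WAV file with AudioSystem.write (turning that into an MP3 still needs a separate encoder):

    // Write the mixed stream to a WAV file; "mixed.wav" is a hypothetical name
    AudioSystem.write(mixer, AudioFileFormat.Type.WAVE, new File("mixed.wav"));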
    

    4. Try AudioConcat, which has a -m option for mixing:

    
    java AudioConcat [ -D ] [ -c ] | [ -m ] | [ -f ] -o outputfile inputfile ...
    
    Parameters:
    
    -c              selects concatenation mode
    -m              selects mixing mode
    -f              selects float mixing mode
    -o outputfile   the filename of the output file
    inputfile       the name(s) of the input file(s)
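    
    For example, to mix two input files into a single output file (hypothetical filenames):
    
    java AudioConcat -m -o mixed.wav input1.wav input2.wav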
    

    5. You could use an ffmpeg Android wrapper, with the syntax and approach explained here

  • 2020-12-14 09:41

    To merge (overlap) two sound files, you can use this FFmpeg library.

    Here is the documentation.

    In their sample you can just enter the command you want, so let's talk about the command we need:

    -i [FIRST_FILE_PATH] -i [SECOND_FILE_PATH] -filter_complex amerge -ac 2 -c:a libmp3lame -q:a 4 [OUTPUT_FILE_PATH]
    

    For the first and second file paths, use the absolute path of the sound file:

    1. If the file is on storage, the path is under Environment.getExternalStorageDirectory().getAbsolutePath().

    2. If it is in the assets, the path should be under file:///android_asset/.

    For the output path, make sure to add the extension, e.g.

    String path = Environment.getExternalStorageDirectory().getAbsolutePath() + "/File Name.mp3";
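
    A minimal sketch of assembling that command in Java (file names are hypothetical; run the string with whatever execute method your chosen FFmpeg wrapper exposes):

    String first = Environment.getExternalStorageDirectory().getAbsolutePath() + "/first.mp3";
    String second = "file:///android_asset/second.mp3";
    String output = Environment.getExternalStorageDirectory().getAbsolutePath() + "/mixed.mp3";
    // amerge overlays the two inputs; libmp3lame re-encodes the mix as MP3
    String cmd = "-i " + first + " -i " + second
            + " -filter_complex amerge -ac 2 -c:a libmp3lame -q:a 4 " + output;
    // pass cmd to your wrapper's execute/run method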
    
  • 2020-12-14 09:45

    This guy used the JLayer library in a project quite similar to yours. He also gives you a guide on how to integrate that library in your Android application by directly recompiling the jar.

    Paraphrasing his code, it is easy to accomplish your task:

    public static byte[] decode(String path, int startMs, int maxMs) 
      throws IOException, com.mindtherobot.libs.mpg.DecoderException {
      ByteArrayOutputStream outStream = new ByteArrayOutputStream(1024);
    
      float totalMs = 0;
      boolean seeking = true;
    
      File file = new File(path);
      InputStream inputStream = new BufferedInputStream(new FileInputStream(file), 8 * 1024);
      try {
        Bitstream bitstream = new Bitstream(inputStream);
        Decoder decoder = new Decoder();
    
        boolean done = false;
        while (! done) {
          Header frameHeader = bitstream.readFrame();
          if (frameHeader == null) {
            done = true;
          } else {
            totalMs += frameHeader.ms_per_frame();
    
            if (totalMs >= startMs) {
              seeking = false;
            }
    
            if (! seeking) {
              SampleBuffer output = (SampleBuffer) decoder.decodeFrame(frameHeader, bitstream);
    
              if (output.getSampleFrequency() != 44100
                  || output.getChannelCount() != 2) {
                throw new com.mindtherobot.libs.mpg.DecoderException("mono or non-44100 MP3 not supported");
              }
    
              short[] pcm = output.getBuffer();
              for (short s : pcm) {
                outStream.write(s & 0xff);
                outStream.write((s >> 8 ) & 0xff);
              }
            }
    
            if (totalMs >= (startMs + maxMs)) {
              done = true;
            }
          }
          bitstream.closeFrame();
        }
    
        return outStream.toByteArray();
      } catch (BitstreamException e) {
        throw new IOException("Bitstream error: " + e);
      } catch (DecoderException e) {
        Log.w(TAG, "Decoder error", e);
        throw new com.mindtherobot.libs.mpg.DecoderException(e);
      } finally {
        IOUtils.safeClose(inputStream);     
      }
    }
    
    public static byte[] mix(String path1, String path2)
        throws IOException, com.mindtherobot.libs.mpg.DecoderException {
        byte[] pcm1 = decode(path1, 0, 60000);
        byte[] pcm2 = decode(path2, 0, 60000);

        // pcmL is the longer buffer, pcmS the shorter one
        byte[] pcmL = pcm1.length >= pcm2.length ? pcm1 : pcm2;
        byte[] pcmS = pcm1.length >= pcm2.length ? pcm2 : pcm1;

        // decode() produces 16-bit little-endian PCM, so mix two bytes (one sample) at a time
        for (int idx = 0; idx + 1 < pcmL.length; idx += 2) {
            int sample = (short) ((pcmL[idx] & 0xff) | (pcmL[idx + 1] << 8));
            if (idx + 1 < pcmS.length) {
                sample += (short) ((pcmS[idx] & 0xff) | (pcmS[idx + 1] << 8));
            }
            // attenuate, then clip to the 16-bit range
            sample = (int) (sample * .71);
            if (sample > Short.MAX_VALUE) sample = Short.MAX_VALUE;
            if (sample < Short.MIN_VALUE) sample = Short.MIN_VALUE;
            pcmL[idx] = (byte) (sample & 0xff);
            pcmL[idx + 1] = (byte) ((sample >> 8) & 0xff);
        }
        return pcmL;
    }
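
    Hypothetical usage (paths are examples); note that the result is raw PCM, not MP3, so to get an MP3 file back you still need to run it through an encoder such as LAME:

    byte[] mixedPcm = mix("/sdcard/song1.mp3", "/sdcard/song2.mp3"); // mixes up to the first 60 s of each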
    

    Note that I added attenuation and clipping in the mixing loop: you always have to do both when mixing two waveforms. If you don't have memory/time constraints, you can build an int[] of the summed samples first and then work out the best attenuation to avoid clipping.

  • 2020-12-14 09:55

    First of all, in order to mix two audio files you need to manipulate their raw representation; since an MP3 file is compressed, you don't have a direct access to the signal's raw representation. You need to decode the compressed MP3 stream in order to "understand" the wave form of your audio signals and then you will be able to mix them.

    Thus, in order to mix two compressed audio file into a single compressed audio file, the following steps are required:

    1. Decode each compressed file using a decoder to obtain the raw data (NO PUBLIC SYSTEM API is available for this; you need to do it manually!).
    2. Mix the two raw uncompressed data streams, applying clipping if necessary. For this, you need to consider the raw data format produced by your decoder (PCM); see the sketch after this list.
    3. Encode the raw mixed data into a compressed MP3 file (as with the decoder, you need to do it manually using an encoder).
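
    As an illustration of step 2, here is a minimal sketch of the mixing itself, assuming both decoders produced 16-bit PCM with the same sample rate and channel count:

    static short[] mixPcm(short[] a, short[] b) {
        short[] out = new short[Math.max(a.length, b.length)];
        for (int i = 0; i < out.length; i++) {
            // sum corresponding samples, treating the shorter stream as silence past its end
            int sum = (i < a.length ? a[i] : 0) + (i < b.length ? b[i] : 0);
            // clip to the 16-bit range to avoid wrap-around distortion
            out[i] = (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, sum));
        }
        return out;
    }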

    More info about MP3 decoders can be found here.

  • 2020-12-14 10:03

    I didn't find a clean solution, but we can use a trick here. :) Assign the two MP3 files to two different MediaPlayer objects, then start both at the same time with a button. Compare the two files to find the longer duration, then use an audio recorder (e.g. Android's AudioRecord) to record for that duration. It will solve your problem. I know it's not the right way, but I hope it helps. :)
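
    A minimal sketch of the playback half of this trick (context and paths are placeholders; error handling omitted):

    MediaPlayer player1 = MediaPlayer.create(context, Uri.parse(path1));
    MediaPlayer player2 = MediaPlayer.create(context, Uri.parse(path2));
    player1.start(); // starting both back to back plays them (almost) simultaneously
    player2.start();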

  • 2020-12-14 10:04

    I have not done it on Android, but I did it using Adobe Flex. I guess the logic remains the same. I followed these steps:

    • I extracted both MP3s into two byte arrays (song1ByteArray, song2ByteArray).
    • Find the bigger byte array (let's say song1ByteArray is the larger one).
    • Create a function that returns the mixed byte array:

      private ByteArray mix2Songs(ByteArray song1ByteArray, ByteArray song2ByteArray) {
          ByteArray returnResultArr = new ByteArray();
          int arrLength = song1ByteArray.length;
          for (int i = 0; i < arrLength; i += 8) { // advance 8 bytes per iteration: a stereo frame has 4 bytes for the left channel + 4 bytes for the right
              // read the left and right channel values of the first song
              float source1_L = song1ByteArray.readFloat(); // readFloat() is Flex; Android will have an equivalent way to read samples
              float source1_R = song1ByteArray.readFloat();
              float source2_L = 0;
              float source2_R = 0;
              if (song2ByteArray.bytesAvailable > 0) { // the second song may be shorter
                  source2_L = song2ByteArray.readFloat(); // left channel of song2ByteArray
                  source2_R = song2ByteArray.readFloat(); // right channel of song2ByteArray
              }
              returnResultArr.writeFloat((source1_L + source2_L) / 2); // average of the two left channels
              returnResultArr.writeFloat((source1_R + source2_R) / 2); // average of the two right channels
          }
          return returnResultArr;
      }
      