MediaRecorder

What does Android's getMaxAmplitude() function for the MediaRecorder actually give me?

假如想象 submitted on 2019-11-26 22:45:36
Question: The Android MediaRecorder has a method, getMaxAmplitude(), which, as the API documentation says, "Returns the maximum absolute amplitude that was sampled since the last call to this method." But I can't find what unit this amplitude is in. Is it pascals or watts? Several pages on the web suggest that you can calculate a value closely correlated to decibels using (as suggested here) double db = (20 * Math.log10(amplitude / REFERENCE)); which would let me assume that the returned value is in some…
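
A minimal sketch of how that formula is usually wired up, assuming the returned value is a dimensionless 16-bit sample magnitude (roughly 0–32767) rather than a physical unit; the REFERENCE constant and the class name here are illustrative, not part of the question:

```java
import android.media.MediaRecorder;

public class AmplitudeMeter {
    // Hypothetical reference value: full-scale 16-bit amplitude.
    private static final double REFERENCE = 32767.0;

    private final MediaRecorder recorder;

    public AmplitudeMeter(MediaRecorder recorder) {
        this.recorder = recorder;
    }

    /** Returns a dB-full-scale style figure: 0 dB at full scale, negative below it. */
    public double readLevelDb() {
        int amplitude = recorder.getMaxAmplitude(); // max since the previous call
        if (amplitude == 0) {
            return Double.NEGATIVE_INFINITY;        // silence, or the very first call
        }
        return 20.0 * Math.log10(amplitude / REFERENCE);
    }
}
```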

MediaRecorder start failed: -38

99封情书 submitted on 2019-11-26 19:02:02
I searched to make sure this question is not a duplicate; some similar questions have no answer and others did not help. This is my code: private void startRecording() { mRecorder = new MediaRecorder(); mRecorder.setAudioSource(MediaRecorder.AudioSource.MIC); mRecorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP); mRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB); mFileName = Environment.getExternalStorageDirectory().getAbsolutePath(); mFileName += "/recordedHeckPost_.3gp"; mRecorder.setOutputFile(mFileName); try { mRecorder.prepare(); //Thread.sleep(2000); mRecorder.start(); } catch…
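
Error -38 is the framework's "invalid operation" code and is commonly reported when start() runs while the recorder is not in the prepared state. A minimal sketch of a correctly ordered lifecycle, assuming the RECORD_AUDIO permission is already granted and writing to app-private external storage instead of the truncated path above; names here are illustrative:

```java
import java.io.File;
import java.io.IOException;
import android.content.Context;
import android.media.MediaRecorder;

public class SimpleAudioRecorder {
    private MediaRecorder recorder;

    public boolean start(Context context) {
        File out = new File(context.getExternalFilesDir(null), "recording.3gp");
        recorder = new MediaRecorder();
        recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
        recorder.setOutputFile(out.getAbsolutePath());
        try {
            recorder.prepare();          // must succeed before start()
        } catch (IOException e) {
            recorder.release();          // never call start() after a failed prepare
            recorder = null;
            return false;
        }
        recorder.start();
        return true;
    }

    public void stop() {
        if (recorder != null) {
            recorder.stop();
            recorder.release();
            recorder = null;
        }
    }
}
```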

MediaRecorder and VideoSource.SURFACE, stop failed: -1007 (a serious Android bug)

佐手、 submitted on 2019-11-26 18:01:23
I'm trying to record with MediaRecorder without using a Camera instance, using a Surface video source instead (yes, it's possible, but it turns out it's not perfect): mediaRecorder.setVideoSource(MediaRecorder.VideoSource.SURFACE); Here is the issue: the code below works only on some devices, works temporarily on some devices after a recent reboot, or doesn't work at all. When it fails, the MediaRecorder.stop() method fails with the following error: E/MediaRecorder: stop failed: -1007 W/System.err: java.lang.RuntimeException: stop failed. at android.media.MediaRecorder.stop(Native…
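
This does not address the device-specific bug the question describes, but a setup sketch makes the required call order explicit: getSurface() is only valid after prepare(), and something (for example a MediaProjection virtual display or your own EGL renderer) has to actually draw frames into that surface before stop(), since stopping a recorder that never received usable frames is one commonly reported way to hit this failure. The size, bitrate, and class name are illustrative:

```java
import java.io.IOException;
import android.media.MediaRecorder;
import android.view.Surface;

public class SurfaceRecorder {
    private MediaRecorder recorder;

    public Surface start(String outputPath) throws IOException {
        recorder = new MediaRecorder();
        recorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
        recorder.setVideoSize(1280, 720);        // must be a size the encoder supports
        recorder.setVideoFrameRate(30);
        recorder.setVideoEncodingBitRate(5_000_000);
        recorder.setOutputFile(outputPath);
        recorder.prepare();                      // the input surface only exists after this
        Surface input = recorder.getSurface();   // hand this to whatever produces frames
        recorder.start();
        return input;
    }

    public void stop() {
        recorder.stop();                         // throws if no usable frames were encoded
        recorder.release();
        recorder = null;
    }
}
```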

MediaStream Capture Canvas and Audio Simultaneously

跟風遠走 submitted on 2019-11-26 17:46:30
I'm working on a project in which I'd like to: (1) load a video js and display it on the canvas; (2) use filters to alter the appearance of the canvas (and therefore the video); (3) use the MediaStream captureStream() method and a MediaRecorder object to record the surface of the canvas and the audio of the original video; (4) play the stream of both the canvas and the audio in an HTML video element. I've been able to display the canvas recording in a video element by tweaking this WebRTC demo code: https://webrtc.github.io/samples/src/content/capture/canvas-record/ That said, I can't figure out how to record…

MediaRecorder crashes on start

霸气de小男生 submitted on 2019-11-26 17:03:39
Question: I've searched many topics but found no straight answer. I have this code: recorder = new MediaRecorder(); recorder.setAudioSource(MediaRecorder.AudioSource.MIC); recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP); recorder.setOutputFile(mFileName); recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB); if(!mStartRecording) { btn.setText("Stop Recording"); try { recorder.prepare(); } catch (IOException e) { e.printStackTrace(); } recorder.start(); mStartRecording = true; } else {…
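
Two things most often separate "crashes on start" from a working recorder: the RECORD_AUDIO runtime permission, and not swallowing the IOException from prepare() and then calling start() anyway (which the snippet above does). A sketch under those assumptions; the field and parameter names mirror the snippet, while the permission plumbing is illustrative:

```java
import java.io.IOException;
import android.Manifest;
import android.app.Activity;
import android.content.pm.PackageManager;
import android.media.MediaRecorder;
import androidx.core.content.ContextCompat;

public class RecordingToggle {
    private MediaRecorder recorder;
    private boolean mStartRecording;

    public void toggleRecording(Activity activity, String mFileName) {
        if (ContextCompat.checkSelfPermission(activity, Manifest.permission.RECORD_AUDIO)
                != PackageManager.PERMISSION_GRANTED) {
            return;              // request the permission first; missing it is a common cause of start failures
        }
        if (!mStartRecording) {
            recorder = new MediaRecorder();
            recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
            recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
            recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
            recorder.setOutputFile(mFileName);
            try {
                recorder.prepare();
            } catch (IOException e) {
                recorder.release();
                return;          // do not fall through to start() after a failed prepare
            }
            recorder.start();
            mStartRecording = true;
        } else {
            recorder.stop();
            recorder.release();
            recorder = null;
            mStartRecording = false;
        }
    }
}
```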

Pause & Resume with Android MediaRecorder (API level < 24)

不打扰是莪最后的温柔 submitted on 2019-11-26 16:47:30
Question: With MediaRecorder, pause/resume is not available below API level 24. So one way to do this would be: on the pause event, stop the recorder and save the recorded file; on resume, start recording again into another file, and keep doing so until the user presses stop; then, at the end, merge all the files. Many people have asked this question on SO, but I couldn't find any way to solve it. People talk about creating multiple media files by stopping recording on the pause action and restarting on…
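
A merge sketch for that "record segments, then join them" approach, under one assumption that differs from the snippets above: each segment is recorded as raw AMR (MediaRecorder.OutputFormat.AMR_NB) rather than 3GP. A raw AMR file is the 6-byte header "#!AMR\n" followed by frames, so segments can be merged by writing the header once and appending the frame data of every file; merging 3GP containers would instead require remuxing:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.List;

public class AmrMerger {
    private static final int AMR_HEADER_LENGTH = 6; // "#!AMR\n"

    public static void merge(List<File> segments, File output) throws IOException {
        try (FileOutputStream out = new FileOutputStream(output)) {
            boolean first = true;
            byte[] buffer = new byte[8192];
            for (File segment : segments) {
                try (FileInputStream in = new FileInputStream(segment)) {
                    if (!first) {
                        long skipped = in.skip(AMR_HEADER_LENGTH); // drop the duplicate header
                        if (skipped != AMR_HEADER_LENGTH) continue; // segment too short, skip it
                    }
                    first = false;
                    int read;
                    while ((read = in.read(buffer)) != -1) {
                        out.write(buffer, 0, read);
                    }
                }
            }
        }
    }
}
```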

AudioSource.VOICE_CALL not working in Android 4.0 but working in Android 2.3

非 Y 不嫁゛ submitted on 2019-11-26 16:09:21
Question: VOICE_CALL, VOICE_DOWNLINK, and VOICE_UPLINK are not working on Android 4.0 but are working on Android 2.3 (on an actual device). I have uploaded a dummy project that records all outgoing calls so that you can see it for yourself: http://www.mediafire.com/?img6dg5y9ri5c7rrtcajwc5ycgpo2nf You just have to change audioSource = MediaRecorder.AudioSource.MIC; to audioSource = MediaRecorder.AudioSource.VOICE_CALL; on line 118 in TService.java. If you come across any error, tell me. Any suggestion related to it will be…
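
Many 4.x devices block VOICE_CALL at the OEM/HAL level, so a commonly suggested workaround (not a fix for the platform restriction) is to attempt it and fall back to VOICE_COMMUNICATION and then MIC when the recorder refuses to start. A sketch of that fallback; the output format and path handling are illustrative:

```java
import java.io.IOException;
import android.media.MediaRecorder;

public class CallRecorderFallback {
    private static final int[] SOURCES = {
            MediaRecorder.AudioSource.VOICE_CALL,
            MediaRecorder.AudioSource.VOICE_COMMUNICATION,
            MediaRecorder.AudioSource.MIC,
    };

    public MediaRecorder startWithFallback(String outputPath) {
        for (int source : SOURCES) {
            MediaRecorder recorder = new MediaRecorder();
            try {
                recorder.setAudioSource(source);
                recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
                recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
                recorder.setOutputFile(outputPath);
                recorder.prepare();
                recorder.start();          // this source works on this device
                return recorder;
            } catch (IOException | RuntimeException e) {
                recorder.release();        // source rejected or blocked; try the next one
            }
        }
        return null;                       // no usable audio source on this device
    }
}
```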

MediaRecorder issue on Android Lollipop

ぃ、小莉子 submitted on 2019-11-26 13:17:56
Question: I'm testing libstreaming on the new Android Lollipop, and this code, which worked on the previous release, seems to throw an exception. try { mMediaRecorder = new MediaRecorder(); mMediaRecorder.setCamera(mCamera); mMediaRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA); mMediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP); mMediaRecorder.setVideoEncoder(mVideoEncoder); mMediaRecorder.setPreviewDisplay(mSurfaceView.getHolder().getSurface()); mMediaRecorder.setVideoSize…
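
Whether this resolves the libstreaming failure depends on what the exception actually is, but camera-backed recording has a documented setup order that stricter releases enforce: the Camera must be unlocked before it is handed to the recorder, and the setters must run between setVideoSource() and prepare(). A sketch of that order; the size, encoder, and class name are illustrative:

```java
import java.io.IOException;
import android.hardware.Camera;
import android.media.MediaRecorder;
import android.view.SurfaceHolder;

public class CameraRecorderSetup {
    public MediaRecorder configure(Camera camera, SurfaceHolder holder, String path)
            throws IOException {
        camera.unlock();                                     // required before setCamera()
        MediaRecorder recorder = new MediaRecorder();
        recorder.setCamera(camera);
        recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
        recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
        recorder.setVideoSize(640, 480);                     // must match a size the camera supports
        recorder.setOutputFile(path);
        recorder.setPreviewDisplay(holder.getSurface());
        recorder.prepare();
        return recorder;                                     // caller invokes start()
    }
}
```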

Android can't record video with Front Facing Camera, MediaRecorder start failed: -19

时间秒杀一切 submitted on 2019-11-26 11:06:23
Question: I have two different code bases with the same problem. The first is code copied straight from developer.android.com, here: http://developer.android.com/guide/topics/media/camera.html#custom-camera The second is this code: http://android-er.blogspot.com.au/2011/10/simple-exercise-of-video-capture-using.html Both work fine with the normal rear camera, but as soon as I try to use the front-facing camera I get the error. This happens on the following devices: Nexus S 4.1.2, Galaxy Nexus 4.1…
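
The fix most often suggested for "start failed: -19" on front cameras is to stop reusing the back camera's video size and instead configure the recorder from a CamcorderProfile that the front camera actually advertises. A sketch under that assumption; the camera-id handling and quality choice are illustrative:

```java
import java.io.IOException;
import android.hardware.Camera;
import android.media.CamcorderProfile;
import android.media.MediaRecorder;

public class FrontCameraRecorder {
    public MediaRecorder prepareForCamera(int cameraId, Camera camera, String outputPath)
            throws IOException {
        camera.unlock();
        MediaRecorder recorder = new MediaRecorder();
        recorder.setCamera(camera);
        recorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
        recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);

        // Pick a profile this particular camera supports, falling back to LOW.
        int quality = CamcorderProfile.hasProfile(cameraId, CamcorderProfile.QUALITY_HIGH)
                ? CamcorderProfile.QUALITY_HIGH
                : CamcorderProfile.QUALITY_LOW;
        recorder.setProfile(CamcorderProfile.get(cameraId, quality));

        recorder.setOutputFile(outputPath);
        recorder.prepare();
        return recorder;     // caller invokes start() and handles release()
    }
}
```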

How can I add predefined length to audio recorded from MediaRecorder in Chrome?

家住魔仙堡 submitted on 2019-11-26 09:09:21
Question: I am in the process of replacing RecordRTC with the built-in MediaRecorder for recording audio in Chrome. The recorded audio is then played back in the program with the audio API. I am having trouble getting the audio.duration property to work. The documentation says that if the video (audio) is streamed and has no predefined length, "Inf" (Infinity) is returned. With RecordRTC, I had to use ffmpeg_asm.js to convert the audio from WAV to Ogg. My guess is that somewhere in the process RecordRTC sets the predefined audio…