Question
A couple of days ago I asked about filling the space between two curved lines with a gradient. Since then I've changed a lot in my code: now I'm using shaders, and the result is almost what I need. BUT! I again have a problem with using several OpenGL views simultaneously in one controller, where they must be completely independent of each other. So, thanks to an excellent tutorial from raywenderlich.com and a code example from bradley, I have the following situation:
- two custom objects of the SoundWaveView.swift class (adapted from bradley's OpenGLView.swift);
- vertex/fragment shaders for the openglView object;
- a controller where I create and add two SoundWaveView objects (with the same context!!!!!):
override func viewDidLoad() {
    super.viewDidLoad()

    let context = EAGLContext(API: EAGLRenderingAPI.OpenGLES2)

    let viewPos1: CGRect = CGRectMake(0, 150, UIScreen.mainScreen().bounds.size.width, 150)
    let view1 = SoundWaveView(frame: viewPos1, fileName: "Personal Jesus", fileExtention: "mp3", c: context)
    view1.backgroundColor = UIColor.grayColor()
    self.view.addSubview(view1)

    let viewPos2: CGRect = CGRectMake(0, 350, UIScreen.mainScreen().bounds.size.width, 150)
    let view2 = SoundWaveView(frame: viewPos2, fileName: "Imagine", fileExtention: "mp3", c: context)
    view2.backgroundColor = UIColor.blackColor()
    self.view.addSubview(view2)
}
Every SoundWaveView object compiles its shaders as in the tutorials above:
func compileShaders() {
    let vertexShader = compileShader("SimpleVertex", shaderType: GLenum(GL_VERTEX_SHADER))
    let fragmentShader = compileShader("SimpleFragment", shaderType: GLenum(GL_FRAGMENT_SHADER))

    let programHandle: GLuint = glCreateProgram()
    glAttachShader(programHandle, vertexShader)
    glAttachShader(programHandle, fragmentShader)
    glLinkProgram(programHandle)

    var linkSuccess: GLint = GLint()
    glGetProgramiv(programHandle, GLenum(GL_LINK_STATUS), &linkSuccess)
    if linkSuccess == GL_FALSE {
        var message = [CChar](count: 256, repeatedValue: CChar(0))
        var length = GLsizei(0)
        glGetProgramInfoLog(programHandle, 256, &length, &message)
        print("Failed to create shader program! : \(String(UTF8String: message))")
        exit(1)
    }
    glUseProgram(programHandle)

    . . . . .

    shaderPeakId = glGetUniformLocation(programHandle, "peakId")
    shaderWaveAmp = glGetUniformLocation(programHandle, "waveAmp")

    glEnableVertexAttribArray(positionSlot)
    glEnableVertexAttribArray(colorSlot)

    glUniform1f(shaderSceneWidth!, Float(self.frame.size.width))
    glUniform1f(shaderSceneHeight!, Float(self.frame.size.height))
    glUniform1i(shaderPeaksTotal!, GLint(total))
    glUniform1i(shaderPeakId!, GLint(position))
}
For rendering the audio waves I need to pass two values to the shader: A) the amplitude (I get it from AVAudioPlayer; it works perfectly, and the two objects created in the controller get different amplitudes for their two different tracks) and B) the wave number (aka peakId/position).
So for the first created object (grey) the position is 1, and its wave must be red and placed on the left part of the screen. For the second object (black) the position is 3, and its wave is blue and placed on the right. (Position 2 is for the center of the screen, not used now.)
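To make the left/center/right convention concrete, here is a hypothetical helper (names and values are illustrative, not taken from the actual shader) showing how a peakId/position value could be mapped to a normalized horizontal anchor:

```swift
// Hypothetical mapping from the question's peakId/position values to a
// normalized horizontal anchor a shader could use:
// 1 = left edge, 2 = center, 3 = right edge.
func waveAnchorX(position: Int) -> Float {
    switch position {
    case 1: return 0.0   // left part of the screen (red wave)
    case 2: return 0.5   // center (unused for now)
    case 3: return 1.0   // right part (blue wave)
    default: return 0.5  // fall back to center for unknown ids
    }
}
```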
Every audio update triggers an OpenGL update:
var waveAmp: Float = 0
. . . . .
func updateMeters() {
    player!.updateMeters()
    let normalizedValue = normalizedPowerLevelFromDecibels(player!.averagePowerForChannel(0))
    waveAmp = normalizedValue!
}
func render(displayLink: CADisplayLink?) {
    glBlendFunc(GLenum(GL_ONE), GLenum(GL_ONE_MINUS_SRC_ALPHA))
    glEnable(GLenum(GL_BLEND))
    glClearColor(0, 0, 0, 0)
    glClear(GLenum(GL_COLOR_BUFFER_BIT) | GLenum(GL_DEPTH_BUFFER_BIT))
    glEnable(GLenum(GL_DEPTH_TEST))

    glViewport(0, 0, GLsizei(self.frame.size.width), GLsizei(self.frame.size.height))

    glBindBuffer(GLenum(GL_ARRAY_BUFFER), vertexBuffer)
    glBindBuffer(GLenum(GL_ELEMENT_ARRAY_BUFFER), indexBuffer)
    glVertexAttribPointer(positionSlot, 3, GLenum(GL_FLOAT), GLboolean(UInt8(GL_FALSE)), GLsizei(sizeof(Vertex)), nil)
    glVertexAttribPointer(colorSlot, 4, GLenum(GL_FLOAT), GLboolean(UInt8(GL_FALSE)), GLsizei(sizeof(Vertex)), UnsafePointer<Void>(bitPattern: (sizeof(Float) * 3)))

    // Set the per-frame uniforms before the draw call, not after presenting,
    // otherwise they only take effect on the next frame.
    print("position=\(position) waveAmp=\(waveAmp)")
    glUniform1i(shaderPeakId!, GLint(position))
    glUniform1f(shaderWaveAmp!, waveAmp)

    glDrawElements(GLenum(GL_TRIANGLE_FAN), GLsizei(Indices.size()), GLenum(GL_UNSIGNED_BYTE), nil)

    context.presentRenderbuffer(Int(GL_RENDERBUFFER))
}
Log from the print() calls:
position=1 waveAmp=0.209313
position=3 waveAmp=0.47332
position=1 waveAmp=0.207556
position=3 waveAmp=0.446235
position=1 waveAmp=0.20769
position=3 waveAmp=0.446235
position=1 waveAmp=0.246206
position=3 waveAmp=0.430118
I expect to see two rectangles: a grey one with a red wave on the left and a black one with a blue wave on the right. BUT here is the trouble: rendering takes place in only one (the last) view and alternates between the two states sequentially.
Please, anyone! How can I use two GL views simultaneously? Or do I instead need to use two shaders in one view?
I want to finish this audio-wave task and share it with everyone, but I still don't understand this problem.
Answer 1:
Looking at the code for the SoundWaveView class you posted, that view handles initializing the context, the framebuffer for the context, and more for you. Since you pass the same EAGLContext to both views, the same context is initialized by the first view and then likely re-initialized by the second one. That doesn't seem right. It might actually work fine if you simply create a separate context for the second view: the GPU will context-switch when rendering between the two views and treat them as if they were separate apps.
But that's not how an application this simple should be designed. Wanting two objects drawn on the screen at the same time certainly doesn't mean you need two separate views. Instead, you could build a system that takes the information from each waveform and converts it into the appropriate draw calls automatically, acting as a mini draw engine for your application. With that approach you could add support for even cooler techniques and drawings: more waveforms, different textures/colors for each of them, effects, etc.
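As a minimal sketch of the first suggestion (assuming the SoundWaveView initializer signature from the question, in the same Swift 2 syntax), the controller would simply give each view its own context instead of sharing one:

```swift
override func viewDidLoad() {
    super.viewDidLoad()

    // One EAGLContext per view, so each SoundWaveView owns its own
    // framebuffer, shader program, and GL state independently.
    let width = UIScreen.mainScreen().bounds.size.width

    let view1 = SoundWaveView(frame: CGRectMake(0, 150, width, 150),
                              fileName: "Personal Jesus", fileExtention: "mp3",
                              c: EAGLContext(API: EAGLRenderingAPI.OpenGLES2))
    view1.backgroundColor = UIColor.grayColor()
    self.view.addSubview(view1)

    let view2 = SoundWaveView(frame: CGRectMake(0, 350, width, 150),
                              fileName: "Imagine", fileExtention: "mp3",
                              c: EAGLContext(API: EAGLRenderingAPI.OpenGLES2))
    view2.backgroundColor = UIColor.blackColor()
    self.view.addSubview(view2)
}
```

Each view's render method would then also need to make its own context current (e.g. via EAGLContext.setCurrentContext) before issuing GL calls, so the draw commands land in the correct framebuffer.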
Source: https://stackoverflow.com/questions/39638481/ios-opengl-several-views-simultaneously