keras combining two losses with adjustable weights


It seems that propagating the same loss into both branches will not have the intended effect unless alpha depends on both branches. If alpha does not depend on both branches, then part of the loss is simply a constant with respect to one branch and contributes nothing to its gradients.

So, in that case, just compile the model with the two losses kept separate and pass the weights to the compile method:

model.compile(optimizer='someOptimizer', loss=[loss1, loss2], loss_weights=[alpha, 1-alpha])

Compile again when you need alpha to change.
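
As a rough illustration of this first approach (the layer sizes, the 'mse' losses and the 'adam' optimizer below are placeholders, not anything from the question):

from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

# hypothetical two-branch model with two separate outputs
inp = Input(shape=(10,))
x1 = Dense(1, name='branch1')(inp)
x2 = Dense(1, name='branch2')(inp)
model = Model(inp, [x1, x2])

alpha = 0.7  # plain Python number, baked in at compile time
model.compile(optimizer='adam',
              loss=['mse', 'mse'],             # loss1, loss2
              loss_weights=[alpha, 1 - alpha])

# later, when alpha should change, recompile with the new weights
alpha = 0.3
model.compile(optimizer='adam',
              loss=['mse', 'mse'],
              loss_weights=[alpha, 1 - alpha])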


But if alpha really does depend on both branches, then you need to concatenate the results into a single output and calculate alpha's value inside the loss:

singleOut = Concatenate()([x1,x2])

And a custom loss function:

def weightedLoss(yTrue, yPred):
    # the two branches were concatenated along the last axis,
    # so split the true and predicted values column-wise
    x1True = yTrue[:, :1]
    x2True = yTrue[:, 1:]

    x1Pred = yPred[:, :1]
    x2Pred = yPred[:, 1:]

    #calculate alpha somehow with keras backend functions

    return (alpha * someLoss(x1True, x1Pred)) + ((1 - alpha) * someLoss(x2True, x2Pred))

Compile with this function:

model.compile(loss=weightedLoss, optimizer=....)
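
As a rough, self-contained sketch of how that could fit together (the layer sizes, the base loss and the way alpha is computed from the two branches are all made up here just to have something runnable; note that the training targets must be concatenated in the same order as the outputs):

import numpy as np
import tensorflow.keras.backend as K
from tensorflow.keras.layers import Input, Dense, Concatenate
from tensorflow.keras.models import Model

def someLoss(yTrue, yPred):
    # placeholder base loss: plain per-sample MSE
    return K.mean(K.square(yTrue - yPred), axis=-1)

def weightedLoss(yTrue, yPred):
    x1True, x2True = yTrue[:, :1], yTrue[:, 1:]
    x1Pred, x2Pred = yPred[:, :1], yPred[:, 1:]
    # made-up alpha that depends on both branches' predictions
    alpha = K.sigmoid(K.mean(x1Pred) - K.mean(x2Pred))
    return alpha * someLoss(x1True, x1Pred) + (1 - alpha) * someLoss(x2True, x2Pred)

# hypothetical model whose two branches are merged into one output
inp = Input(shape=(10,))
x1 = Dense(1)(inp)                      # matches the [:, :1] slice
x2 = Dense(3)(inp)                      # matches the [:, 1:] slice
singleOut = Concatenate()([x1, x2])
model = Model(inp, singleOut)
model.compile(loss=weightedLoss, optimizer='adam')

# the targets must be concatenated the same way as the outputs
y1 = np.random.rand(32, 1)
y2 = np.random.rand(32, 3)
model.fit(np.random.rand(32, 10), np.concatenate([y1, y2], axis=-1), epochs=1)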