由于粗神经网络的误差传递函数不可微,所以采用遗传算法来训练粗神经网络。
Because the error transfer function of the rough neural network is not differentiable, a genetic algorithm is applied to train the network.
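A genetic algorithm suits this setting because it uses only fitness evaluations, never gradients. The following is a minimal sketch (not the paper's actual training procedure): a real-coded GA with tournament selection, uniform crossover, and Gaussian mutation, minimizing a stand-in non-differentiable error function; the objective, population size, and mutation rate are illustrative assumptions.

```python
import random

def ga_minimize(fitness, dim, pop_size=40, gens=60, sigma=0.3, seed=0):
    """Minimal real-coded genetic algorithm: tournament selection,
    uniform crossover, Gaussian mutation. Applicable to non-differentiable
    objectives because it relies only on fitness values, not gradients."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) < fitness(b) else b
        nxt = []
        for _ in range(pop_size):
            p1, p2 = tournament(), tournament()
            # Uniform crossover: each gene comes from either parent.
            child = [p1[i] if rng.random() < 0.5 else p2[i] for i in range(dim)]
            # Gaussian mutation on roughly 20% of genes.
            child = [g + rng.gauss(0, sigma) if rng.random() < 0.2 else g
                     for g in child]
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

# Hypothetical non-differentiable error (stand-in for the network's error),
# minimized at w = (1, -2):
best = ga_minimize(lambda w: abs(w[0] - 1) + abs(w[1] + 2), dim=2)
```

In a real rough-neural-network setting, `fitness` would evaluate the network's training error for a candidate weight vector.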
极大极小问题是一类不可微优化问题,熵函数法是求解这类问题的一种有效算法。
The minimax problem is a class of non-differentiable optimization problems, and the entropy function method is an efficient algorithm for solving such problems.
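The entropy function method smooths the non-differentiable objective max_i f_i(x) with the log-sum-exp (entropy) function F_p(x) = (1/p) ln Σ_i exp(p f_i(x)), which is smooth, overestimates the max by at most (ln m)/p for m functions, and converges to it as p → ∞. A minimal sketch, with the component functions and the value of p chosen purely for illustration:

```python
import math

def entropy_smooth_max(fs, x, p=100.0):
    """Entropy (log-sum-exp) smoothing of max_i f_i(x):
       F_p(x) = (1/p) * ln(sum_i exp(p * f_i(x))),
    computed in the shifted, overflow-safe form."""
    vals = [f(x) for f in fs]
    m = max(vals)
    return m + math.log(sum(math.exp(p * (v - m)) for v in vals)) / p

# Illustrative minimax components: max(|x - 1|, |x + 1|) at x = 0.5.
fs = [lambda x: abs(x - 1), lambda x: abs(x + 1)]
approx = entropy_smooth_max(fs, 0.5, p=200.0)
exact = max(abs(0.5 - 1), abs(0.5 + 1))  # = 1.5
# F_p >= max, and F_p - max <= ln(2)/p for two components.
```

Because F_p is differentiable, standard smooth solvers can then minimize it in place of the original minimax objective.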
由于优化问题可能是不连续的、不可微的甚或是没有函数解析式的,传统经典的无约束优化方法在应用时会受到限制。
Because the optimization problem may be discontinuous, non-differentiable, or even lack an analytic objective function, traditional unconstrained optimization methods are limited in their applicability.
接着,利用函数的上次微分构造了不可微向量优化问题(VP)的广义对偶模型,并且在适当的弱凸性条件下建立了弱对偶定理。
Next, a generalized dual model of the non-differentiable vector optimization problem (VP) is constructed using the upper subdifferential of the function, and a weak duality theorem is established under suitable weak convexity conditions.