What would be an idiomatic F# way to scale a list of (n-tuples or list) with another list, also arrays?


Question


Given:

let weights = [0.5;0.4;0.3]
let X = [[2;3;4];[7;3;2];[5;3;6]]

What I want is:

wX = [(0.5)*[2;3;4];(0.4)*[7;3;2];(0.3)*[5;3;6]]

I would like to know an elegant way to do this with lists as well as with arrays. Additional optimization tips are welcome.


Answer 1:


Your title mentions tuples, but your code shows a list of lists. Taking the list-of-lists form as given, a solution would be

let weights = [0.5;0.4;0.3]
let X = [[2;3;4];[7;3;2];[5;3;6]]
X
|> List.map2 (fun w x -> 
    x 
    |> List.map (fun xi -> 
        (float xi) * w
    )
) weights

Depending on how comfortable you are with the syntax, you may prefer a one-liner like

List.map2 (fun w x -> List.map (float >> (*) w) x) weights X

The same library functions exist for sequences (Seq.map2, Seq.map) and arrays (in the Array module).
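For instance, a sketch of the array version using the same data as the question (`Array.map2` and `Array.map` mirror the list functions; the variable names are mine):

```fsharp
let weights = [| 0.5; 0.4; 0.3 |]
let X = [| [| 2; 3; 4 |]; [| 7; 3; 2 |]; [| 5; 3; 6 |] |]

// Pair each weight with its row, then scale every element of that row.
let wX =
    Array.map2 (fun w row -> row |> Array.map (fun xi -> w * float xi)) weights X
// wX.[0] = [| 1.0; 1.5; 2.0 |]
```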




Answer 2:


This is much more than an answer to the specific question, but after a chat in the comments, and learning that the question arose while implementing a neural network in F#, I am posting this: it covers the question and implements the feedforward part of a neural network. It makes use of MathNet Numerics.

This code is an F# translation of part of the Python code from Neural Networks and Deep Learning.

Python

def backprop(self, x, y):
    """Return a tuple ``(nabla_b, nabla_w)`` representing the
    gradient for the cost function C_x.  ``nabla_b`` and
    ``nabla_w`` are layer-by-layer lists of numpy arrays, similar
    to ``self.biases`` and ``self.weights``."""
    nabla_b = [np.zeros(b.shape) for b in self.biases]
    nabla_w = [np.zeros(w.shape) for w in self.weights]
    # feedforward
    activation = x
    activations = [x] # list to store all the activations, layer by layer
    zs = [] # list to store all the z vectors, layer by layer
    for b, w in zip(self.biases, self.weights):
        z = np.dot(w, activation)+b
        zs.append(z)
        activation = sigmoid(z)
        activations.append(activation)

F#

module NeuralNetwork1 =

    //# Third-party libraries
    open MathNet.Numerics.Distributions         // Normal.Sample
    open MathNet.Numerics.LinearAlgebra         // Matrix

    type Network(sizes : int array) = 

        let mutable (_biases : Matrix<double> list) = []
        let mutable (_weights : Matrix<double> list) = []    

        member __.Biases
            with get() = _biases
            and set value = 
                _biases <- value
        member __.Weights
            with get() = _weights
            and set value = 
                _weights <- value

        member __.Backprop (x : Matrix<double>) (y : Matrix<double>) =
            // Note: There is a separate member for feedforward. This one is only used within Backprop 
            // Note: In the text layers are numbered from 1 to n   with 1 being the input and n   being the output
            //       In the code layers are numbered from 0 to n-1 with 0 being the input and n-1 being the output
            //       Layers
            //         1     2     3    Text
            //         0     1     2    Code
            //       784 -> 30 -> 10
            let feedforward () : (Matrix<double> list * Matrix<double> list) =
                let (bw : (Matrix<double> * Matrix<double>) list) = List.zip __.Biases __.Weights
                let rec feedforwardInner layer activation zs activations =
                    match layer with
                    | x when x < (__.NumLayers - 1) ->
                        let (bias, weight) = bw.[layer]
                        let z = weight * activation + bias
                        let activation = __.Sigmoid z
                        feedforwardInner (layer + 1) activation (z :: zs) (activation :: activations)
                    | _ -> 
                        // Normally with recursive functions that build lists for returning,
                        // the final list(s) would be reversed before returning.
                        // However, since the returned lists will be accessed in reverse order
                        // for the backpropagation step, we leave them in reverse order.
                        (zs, activations)
                feedforwardInner 0 x [] [x]

In weight * activation, the * is an operator overloaded for Matrix&lt;double&gt;.
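The snippet also references `__.NumLayers` and `__.Sigmoid`, which are defined elsewhere in the full translation; a minimal sketch of what they could look like inside the `Network` type (my assumption, not necessarily the author's exact code):

```fsharp
// Hypothetical members, assumed to live inside Network(sizes : int array):
member __.NumLayers = sizes.Length

// Element-wise logistic sigmoid over a MathNet matrix
member __.Sigmoid (z : Matrix<double>) : Matrix<double> =
    z.Map(fun v -> 1.0 / (1.0 + exp -v))
```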

Relating this back to your example data, and using MathNet Numerics arithmetic:

let weights = [0.5;0.4;0.3]
let X = [[2;3;4];[7;3;2];[5;3;6]]

First, the values in X need to be converted to float:

let x1 = [[2.0;3.0;4.0];[7.0;3.0;2.0];[5.0;3.0;6.0]]

Now notice that x1 has the shape of a matrix and weights the shape of a vector, so we can just multiply:

 let wx1 = weights * x1
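For `*` to resolve to MathNet's operator, the plain F# lists first need to become MathNet types; one way (assuming the `vector`/`matrix` builders from the MathNet.Numerics.FSharp package; names are mine) would be:

```fsharp
open MathNet.Numerics.LinearAlgebra

let w = vector [ 0.5; 0.4; 0.3 ]
let m = matrix [ [ 2.0; 3.0; 4.0 ]
                 [ 7.0; 3.0; 2.0 ]
                 [ 5.0; 3.0; 6.0 ] ]

// Row vector times matrix: w is treated as a 1x3 row vector,
// yielding a 3-element vector of weighted column sums.
let wx1 = w * m
```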

Since the way I validated this code was more thorough than usual, I will explain it so that you have no doubts about its validity.

When working with Neural Networks and in particular mini-batches, the starting numbers for the weights and biases are random and the generation of the mini-batches is also done randomly.

I know the original Python code was valid: I ran it successfully and got the same results as indicated in the book, meaning the initial success rates were within a couple of percent of the book's and the graphs of the success rate were the same. I did this for several runs and several configurations of the neural network discussed in the book. Then I ran the F# code and obtained the same graphs.

I also copied the starting random number sets from the Python code into the F# code, so that while the data generated was random, both the Python and F# code used the same starting numbers, of which there are thousands. I then single-stepped both the Python and the F# code to verify that each individual function returned a comparable float value, i.e. I put a breakpoint on each line and checked each value. This actually took a few days, because I had to write export and import code and massage the data from Python to F#.

See: How to determine type of nested data structures in Python?

I also tried a variation where I replaced the F# list with a LinkedList&lt;Matrix&lt;double&gt;&gt;, but found no increase in speed. It was an interesting exercise.




Answer 3:


If I understand correctly (treating X as a list of 3-tuples, as the title suggests, and pairing each weight with its tuple):

let wX =
    (weights, X)
    ||> List.map2 (fun w (a, b, c) ->
        w * float a,
        w * float b,
        w * float c)



Answer 4:


An alternate way to achieve this is Math.Net's matrix arithmetic: https://numerics.mathdotnet.com/Matrix.html#Arithmetics
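As a sketch of what that could look like for the row-wise scaling asked about (an assumption on my part, not necessarily what this answer intended; `DenseMatrix.init` builds a diagonal weight matrix):

```fsharp
open MathNet.Numerics.LinearAlgebra

let w = vector [ 0.5; 0.4; 0.3 ]
let m = matrix [ [ 2.0; 3.0; 4.0 ]
                 [ 7.0; 3.0; 2.0 ]
                 [ 5.0; 3.0; 6.0 ] ]

// A diagonal matrix of the weights, multiplied on the left,
// scales row i of m by w.[i], which is the wX from the question.
let d  = DenseMatrix.init w.Count w.Count (fun i j -> if i = j then w.[i] else 0.0)
let wX = d * m
```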



Source: https://stackoverflow.com/questions/41991006/what-would-be-an-idiomatic-f-way-to-scale-a-list-of-n-tuples-or-list-with-ano
