Quiz 2 - Question 2
You trained the weights of a two-layer neural network WITHOUT a non-linear activation function that predicts either the token “rice” (positive class) or the token “cake” (negative class) from a two-dimensional prompt embedding. The hidden layer also has a dimension of two, i.e., it consists of two neurons. The weights of the first layer are represented by the vectors w11 = (1, 1) and w12 = (-2, 2), for the first and second neuron, respectively. The weights of the second layer are represented by the vector w21 = (3, 1). The bias term for both layers is 0.
As you have seen, multi-layer neural networks without non-linear activation functions can be reduced to a single-layer neural network. What is the weight vector v of a single-layer neural network that leads to the same predictions as the multi-layer neural network defined above?
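Because there is no non-linearity and all biases are 0, the output is w21 · (W1 x), which equals (w21[0]·w11 + w21[1]·w12) · x. A minimal sketch (plain Python, using the weights from the question) that computes this collapsed weight vector and checks it against the two-layer forward pass:

```python
# Weights from the question; no activations, zero biases.
w11 = (1, 1)    # first hidden neuron
w12 = (-2, 2)   # second hidden neuron
w21 = (3, 1)    # output layer

def two_layer(x):
    # Hidden layer: h_i = w1i . x, then output = w21 . h
    h = (w11[0] * x[0] + w11[1] * x[1],
         w12[0] * x[0] + w12[1] * x[1])
    return w21[0] * h[0] + w21[1] * h[1]

# Collapsed single-layer weights: v = w21[0]*w11 + w21[1]*w12
v = tuple(w21[0] * a + w21[1] * b for a, b in zip(w11, w12))

def one_layer(x):
    return v[0] * x[0] + v[1] * x[1]

print(v)  # → (1, 5)

# The two networks agree on any input:
for x in [(1.0, 0.0), (0.0, 1.0), (2.5, -3.0)]:
    assert two_layer(x) == one_layer(x)
```

So the equivalent single-layer weight vector is v = (1, 5): the output weights form a linear combination of the hidden-layer weight vectors.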
This exercise is part of the course
Google DeepMind: Design And Train Neural Networks