Expressiveness and an introduction to neural networks
This lab has two goals:

1. Give you a very brief introduction to Keras, a Python package for building and training neural networks, and
2. Allow you to gain some intuition about how simple dense neural networks can be structured to approximate functions \(f:\mathbb{R}\rightarrow \mathbb{R}\).
Below is a sequence of functions in increasing order of difficulty. For each, your task is to define a neural network representation with one hidden layer. For simplicity, use the sigmoid activation function on the hidden layer neurons and no activation function on the output neuron.
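Every task below uses the same skeleton, so it may help to see it once up front. The following is a minimal sketch assuming TensorFlow's bundled Keras; the helper name `one_hidden_layer_model` is ours for this lab, not part of the Keras API.

```python
from tensorflow import keras

def one_hidden_layer_model(n_hidden):
    """Build a 1 -> n_hidden -> 1 dense network with sigmoid hidden units."""
    return keras.Sequential([
        keras.Input(shape=(1,)),
        keras.layers.Dense(n_hidden, activation="sigmoid"),
        keras.layers.Dense(1),  # no activation: a linear combination of hidden units
    ])

model = one_hidden_layer_model(n_hidden=2)
model.summary()

# For each Dense layer the kernel has shape (inputs, units) and the bias has
# shape (units,); these are the arrays set by hand in the sketches below.
for w in model.get_weights():
    print(w.shape)
```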
1. A translated sigmoid. It should be centered at \(x=2\) and be \(4\) units in height. (A hand-built sketch of this one appears after the list.)
2. A translated sigmoid, but with a steeper ramp. Again, it should be centered at \(x=2\) and be \(4\) units in height. What is the relationship between the weight and the bias that keeps the ramp centered at \(x=2\)?
3. Another sigmoid. It should be centered at \(x=4\) and be \(4\) units in height, but face in the opposite direction.
4. A bump function. It should be \(4\) units in height. (This one is also sketched after the list.)
5. A two-bump function. One bump should be \(4\) units in height, the other \(10\).
6. A two-adjacent-bumps function. One bump should be \(4\) units in height, the other \(10\).
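For concreteness, here is one possible hand-built answer to items 1 and 4 (a sketch, not the only solution; the steepness and bump edges are arbitrary choices). The weights below realize \(4\,\sigma(x-2)\) for the translated sigmoid and a difference of two steep sigmoids for the bump.

```python
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras

def one_hidden_layer_model(n_hidden):
    # Same builder as in the earlier sketch.
    return keras.Sequential([
        keras.Input(shape=(1,)),
        keras.layers.Dense(n_hidden, activation="sigmoid"),
        keras.layers.Dense(1),
    ])

# Item 1: a height-4 sigmoid centered at x = 2, i.e. 4 * sigmoid(1 * (x - 2)).
sigmoid_model = one_hidden_layer_model(1)
sigmoid_model.set_weights([
    np.array([[1.0]]),   # hidden kernel, shape (1, 1)
    np.array([-2.0]),    # hidden bias; the ramp is centered at -bias/weight = 2
    np.array([[4.0]]),   # output kernel: height 4
    np.array([0.0]),     # output bias
])

# Item 4: a height-4 bump as the difference of two steep sigmoids,
# here with (arbitrarily chosen) ramps at x = 1 and x = 3.
bump_model = one_hidden_layer_model(2)
bump_model.set_weights([
    np.array([[10.0, 10.0]]),   # hidden kernel, shape (1, 2)
    np.array([-10.0, -30.0]),   # ramps centered at x = 1 and x = 3
    np.array([[4.0], [-4.0]]),  # +4 on the rising ramp, -4 on the falling one
    np.array([0.0]),
])

x = np.linspace(-2.0, 6.0, 400).reshape(-1, 1)
plt.plot(x, sigmoid_model.predict(x, verbose=0), label="translated sigmoid")
plt.plot(x, bump_model.predict(x, verbose=0), label="bump")
plt.legend()
plt.show()
```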
Exercise: Suppose that \(f: [0,1] \rightarrow \mathbb{R} \) is a continuous function. Based on your work above, how would you go about designing a one-hidden-layer neural network that approximates \(f\)?
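One way to think about this exercise (a sketch under the assumption that steep-sigmoid bumps are acceptable): cut \([0,1]\) into bins and, over each bin, place a bump whose height is \(f\) evaluated at the bin's midpoint. The code below builds the corresponding hidden-layer weights directly in NumPy so they are explicit; the example function, number of bumps, and steepness are arbitrary choices, and the fit is rough near the endpoints.

```python
import numpy as np
import matplotlib.pyplot as plt

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bump_approximation(f, n_bumps=25, steepness=200.0):
    """Hand-built one-hidden-layer network: two steep sigmoids per bin,
    scaled so the bump over bin i has height f(midpoint of bin i)."""
    edges = np.linspace(0.0, 1.0, n_bumps + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])
    # Hidden layer: neuron j computes sigmoid(steepness * (x - edge_j)).
    hidden_w = np.full(2 * n_bumps, steepness)
    hidden_b = -steepness * np.concatenate([edges[:-1], edges[1:]])
    # Output layer: +f(mid) on each bin's left edge, -f(mid) on its right edge.
    out_w = np.concatenate([f(mids), -f(mids)])

    def network(x):
        x = np.asarray(x, dtype=float)[:, None]       # shape (N, 1)
        hidden = sigmoid(x * hidden_w + hidden_b)     # shape (N, 2 * n_bumps)
        return hidden @ out_w                         # shape (N,)

    return network

f = lambda x: np.sin(2 * np.pi * x) + x   # an arbitrary continuous example on [0, 1]
net = bump_approximation(f)
xs = np.linspace(0.0, 1.0, 500)
plt.plot(xs, f(xs), label="f")
plt.plot(xs, net(xs), "--", label="bump-sum network")
plt.legend()
plt.show()
```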
Exercise: Suppose that \(f(x) = e^{x-1}\), as below. Find a one-hidden-layer neural network approximation of \(f\), this time using the \(\mathrm{ReLU}\) activation function for the hidden-layer neurons. Graph your results.
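One possible construction (a sketch; the interval \([0,2]\), the number of knots, and the helper names are our choices): with ReLU hidden units, one neuron per knot reproduces the piecewise-linear interpolant of \(f\) exactly, since each neuron adds a kink that changes the slope by a chosen amount.

```python
import numpy as np
import matplotlib.pyplot as plt

def relu(z):
    return np.maximum(z, 0.0)

f = lambda x: np.exp(x - 1)

# One hidden ReLU neuron per knot: the hidden kernel is 1, each bias is -knot,
# and the output weights are chosen so the network equals the piecewise-linear
# interpolant of f at the knots.
knots = np.linspace(0.0, 2.0, 9)                          # assumed domain [0, 2]
slopes = np.diff(f(knots)) / np.diff(knots)               # slope on each interval
out_w = np.concatenate([[slopes[0]], np.diff(slopes)])    # slope changes at the kinks
out_b = f(knots[0])

def network(x):
    x = np.asarray(x, dtype=float)[:, None]   # shape (N, 1)
    hidden = relu(x - knots[:-1])             # shape (N, len(knots) - 1)
    return hidden @ out_w + out_b

xs = np.linspace(0.0, 2.0, 400)
plt.plot(xs, f(xs), label=r"$e^{x-1}$")
plt.plot(xs, network(xs), "--", label="ReLU network")
plt.legend()
plt.show()
```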