# Simple linear regression with TensorFlow

To get you started with TensorFlow and machine learning, we want to show you a simple example of how to do linear regression. Using this example, we can start exploring the TensorFlow APIs, get a feeling for machine learning, and learn techniques that can be used to create more complex applications later on.

The task will be to find a line function `y = m * x + b` for a given set of data points which has the minimum mean squared distance to all the given data points. Each data point provides a value for `x` and a value for `y`. This means the model we will create and train has to be able to find values for `m` and `b` so that the resulting line function has a minimal distance to all data points.
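The objective can be written down directly: for candidate values of `m` and `b`, the mean squared error is the average of the squared vertical distances between the line and the points. A minimal Numpy sketch (the function name `mse` and the sample points are illustrative, not part of the TensorFlow API):

```python
import numpy as np

def mse(m, b, x, y):
    # mean of the squared vertical distances between the line m * x + b and the points
    return np.mean((y - (m * x + b)) ** 2)

x = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 2.0, 4.0])
print(mse(2.0, 0.0, x, y))  # the line y = 2x fits these points exactly, so the error is 0.0
```

Training the model means searching for the `m` and `b` that minimize this quantity over the whole data set.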

If we plot an example of the given data points it can look something like this:

Each point in the image above will be used as input to train our model.

## Preparation

This tutorial assumes you already have Python installed. If you have not installed it, head over to https://www.python.org/ and install Python 3.6.6 (Python 3.7 is not supported by TensorFlow at the moment).

First of all you need to install TensorFlow. To do this open up your console and enter

`pip install tensorflow`

or

`pip install tensorflow-gpu`

(If you want to use GPU support, you need an NVIDIA GPU and CUDA installed on your machine; see https://www.tensorflow.org/install/gpu)

After installing TensorFlow you can verify your installation by starting the Python console and doing a simple import of TensorFlow. Open the Python console by entering `python` in your Bash/Cmd/PowerShell, then enter:

```
import tensorflow as tf
tf.__version__
```

You should see the currently installed version of TensorFlow printed out. At the time this tutorial was created, the current version was `'1.11.0'`.

In addition to TensorFlow we also need Numpy and Matplotlib for this tutorial. As with TensorFlow, open your console and enter:

`pip install numpy`

`pip install matplotlib`

Numpy will be used for creating our sample data and Matplotlib will be used to display the generated sample data and the resulting line when using our model.

## Generate data

As our first step, we want to generate the necessary data on which to perform a linear regression. Create a new project in your preferred IDE (PyCharm, VS Code, Jupyter Notebook). We used Jupyter and the inline plot functionality of Matplotlib, so all plots are displayed without using the *show* function. If you are using a different IDE, consider also calling *show* on the created plots.

Import the needed libraries:

```
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
# if you are using Jupyter and want to see the plot inline
%matplotlib inline
```

Generate values for `x` and `y` using Numpy:

```
x = np.linspace(0, 200, 100, dtype=np.float32) + np.random.uniform(-100, 100, size=100).astype(np.float32)
y = np.linspace(0, 200, 100, dtype=np.float32) + np.random.uniform(-100, 100, size=100).astype(np.float32)
```

This will create 100 values for `x` and 100 values for `y` between -100 and 300. Using Matplotlib you can display the values as seen in the image above with `plt.plot(x, y, '*')`. Your plot may differ from the image shown above because of the random values added.

## Train a linear regression model

The easiest way to train a linear regression model is to use the predefined API of TensorFlow. For this kind of task, TensorFlow provides a class named **LinearRegressor**. You have to give it a set of input and expected values.

In our example the input values are all of our `x` values and the expected values are all of our `y` values. The `y` values are the expected values because our line function is `y = m * x + b` and we want to learn how to calculate `y` for a given `x`. To be able to calculate `y` from `x`, our model has to learn the correct values for `m` and `b` that produce the minimum mean squared error on our given training data set.

If you provide an input value together with a known result value, this kind of training is called **supervised learning**. We input a value `x` into our model, for which it will try to calculate a resulting `y`. By knowing the correct result `y` during the training process, we are able to correct the model in case it calculates a wrong value.

To set up the training process we create a so-called input function. This function will provide `x` and `y` to our LinearRegressor. TensorFlow has an easy-to-use API for creating an input function out of a Numpy array.

`input_fn = tf.estimator.inputs.numpy_input_fn({"x": x}, y, shuffle=True)`

As the first parameter of this function we provide our `x` values within a dictionary. It has to be a dictionary because it is possible to provide multiple input values to a LinearRegressor to train more complex models; for example, calculating housing prices by giving it a set of features like size, age, etc. In our example `x` is the only input we need to calculate `y`. We name our input **"x"** in the input dictionary so it can be identified later on. The second parameter is our expected values, in our case `y`. The last parameter determines whether the input values should be shuffled. This is normally a good idea when training complex models, because it shows the model the variations in the training data early on: the model does not optimize to a subset of the data, but to a random sample chosen from the whole dataset. Therefore we set shuffle to True.

Next we want to tell the generic LinearRegressor API what the input is and how it can get to it. To do this we create a numeric feature column. This tells the LinearRegressor that the input will be a number and that it can be identified with the key **"x"**.

`column_x = tf.feature_column.numeric_column("x")`

Now that we have our input function and a mapping of `x` to a feature column, we can create our LinearRegressor

`regressor = tf.estimator.LinearRegressor([column_x], model_dir="/tmp/tutorial/linear_regression")`

and start training our model

```
for i in range(20):
    print("Running epoch", i + 1)
    regressor.train(input_fn)
```

As mentioned above a LinearRegressor can have multiple inputs. Therefore we provide our input column as an array. The *model_dir* is where our trained model will be saved locally. This can be changed to whatever folder you would like your model to reside in.

After creating a LinearRegressor we start the training by calling *train* on it, passing it our input function as parameter. Normally it takes more than one training iteration (*epoch*) to get good results, which is why we run the training 20 times, meaning we show our generated data to our model 20 times. On each iteration the values `m` and `b` will be adjusted by TensorFlow, producing a more accurate result.

While the training is running you can see the output printed to your console. The generated output looks like this, even though the concrete values may vary for you because of the random input data.

```
Running epoch 1
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Saving checkpoints for 0 into /tmp/tutorial/linear_regression\model.ckpt.
INFO:tensorflow:loss = 1541075.6, step = 0
INFO:tensorflow:Saving checkpoints for 1 into /tmp/tutorial/linear_regression\model.ckpt.
INFO:tensorflow:Loss for final step: 1541075.6.
Running epoch 2
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from /tmp/tutorial/linear_regression\model.ckpt-1
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Saving checkpoints for 1 into /tmp/tutorial/linear_regression\model.ckpt.
INFO:tensorflow:loss = 1094541.0, step = 1
INFO:tensorflow:Saving checkpoints for 2 into /tmp/tutorial/linear_regression\model.ckpt.
INFO:tensorflow:Loss for final step: 1094541.0.
Running epoch 3
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from /tmp/tutorial/linear_regression\model.ckpt-2
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Saving checkpoints for 2 into /tmp/tutorial/linear_regression\model.ckpt.
INFO:tensorflow:loss = 892589.5, step = 2
INFO:tensorflow:Saving checkpoints for 3 into /tmp/tutorial/linear_regression\model.ckpt.
INFO:tensorflow:Loss for final step: 892589.5.
...
```

As you can see, each epoch has a log statement showing us the current loss. This represents the mean squared distance our model is currently producing. As the training progresses this number has to decrease, otherwise the training is not working correctly. In our case, running the training for 20 epochs on the training data produces a loss of *556745*. Note that even though a smaller loss is always better, it can never reach 0 given our current input and expected values.
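To see why the loss cannot reach 0, note that the noisy data points do not all lie on any single line, so even the best possible line leaves a residual error. A small Numpy sketch, independent of the TensorFlow pipeline (`np.polyfit` with degree 1 computes the closed-form least-squares line; the seed is fixed only to make the sketch reproducible):

```python
import numpy as np

np.random.seed(0)
x = np.linspace(0, 200, 100) + np.random.uniform(-100, 100, size=100)
y = np.linspace(0, 200, 100) + np.random.uniform(-100, 100, size=100)

# closed-form least-squares fit of y = m * x + b
m, b = np.polyfit(x, y, 1)
residual_mse = np.mean((y - (m * x + b)) ** 2)

# even the optimal line has a nonzero mean squared error on noisy data
print(residual_mse > 0)  # True
```

This residual error is the floor the training loss converges towards; no amount of additional epochs can push it below that.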

## Verify our model

Now that we have successfully trained our model, we want to verify that the training produced the desired result. To do this, we create a new set of test input values for `x` and let our model calculate the corresponding `y` values.

`x_test = np.linspace(0, 200, 2, dtype=np.float32) + np.random.uniform(-100, 100, size=2)`

We also have to create a new input function using our new `x_test`. The difference this time is that we don't provide any values for `y`, because we now want to calculate them using our trained model.

`test_input_fn = tf.estimator.inputs.numpy_input_fn({"x": x_test}, shuffle=False)`

The feature column is the same as we used before, therefore we are still using **"x"** as key in our input dictionary.

We perform the calculation of the `y` values by calling *predict* and passing it our new input function.

`result_iterator = regressor.predict(test_input_fn)`

The result is an iterator. The iterator does not contain the values of `y` yet; these values will be calculated lazily by TensorFlow only when they are requested.

To get the actual values, we use a list comprehension to iterate over all available results and put them into a simple array.

`y_pred = [predictions["predictions"] for predictions in result_iterator]`

The value within the iterator is a dictionary containing the key **"predictions"**. Similar to passing in the input data as a dictionary, the result will be returned as a dictionary as well by the LinearRegressor.
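The unpacking pattern can be illustrated without TensorFlow: the iterator yields one dictionary per test input, each holding a **"predictions"** array. A stand-in sketch (the dictionaries and their values here are hand-made to mimic the shape of the real results, not actual model output):

```python
import numpy as np

# stand-in for the lazy result_iterator returned by regressor.predict
result_iterator = iter([
    {"predictions": np.array([1.5])},
    {"predictions": np.array([3.0])},
])

# same list comprehension as used with the real LinearRegressor results
y_pred = [predictions["predictions"] for predictions in result_iterator]
print(y_pred)  # two one-element arrays, one per test input
```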

After we have the calculated values for `y`, we want to visualize the result to see if the training was successful. To do this we use Matplotlib to plot the data set used for training, as well as the resulting line created with our trained model.

```
plt.plot(x, y, "*")
plt.plot(x_test, y_pred, 'r')
```

As you can see, the red line represents the line function learned by our model according to the given data set. If you go back and perform the training several more times, it will most likely produce an even better approximation of the line function, with a smaller mean squared distance to the data points. You can download the Jupyter Notebook with the code used in this tutorial here.

In case you are interested in what the learned values for `m` and `b` are, you can use the function **get_variable_value** of the LinearRegressor and pass in either `linear/linear_model/x/weights` to get `m` or `linear/linear_model/bias_weights` to get `b`.

```
m = regressor.get_variable_value("linear/linear_model/x/weights")
b = regressor.get_variable_value("linear/linear_model/bias_weights")
```

## Conclusion

**In conclusion, we can say it was pretty easy to create a simple linear regression with TensorFlow using just a few lines of code. This was only a small example to get you started, but the same techniques and APIs can be used to train more complex models with more inputs (features).**