tf.reduce_mean: Calculate Mean of A Tensor Along An Axis Using TensorFlow

tf.reduce_mean - Use TensorFlow reduce_mean operation to calculate the mean of tensor elements along various dimensions of the tensor

Type: FREE   By: Sebastian Gutierrez, AIWorkbox.com Instructor   Duration: 4:32   Technologies: TensorFlow, Python




Transcript:

First, we import TensorFlow as tf.

import tensorflow as tf


Then we print the TensorFlow version that we are using.

print(tf.__version__)

We are using TensorFlow 1.0.1.

A frequent operation that comes up is that you want to get the mean value of a tensor along a certain dimension.

Sometimes you want the mean of all the elements of a tensor while other times you might want to get the mean of just a certain axis.


The way to do this is to use the tf.reduce_mean operation.


We’re going to define a TensorFlow constant to use for this video.

constant_float_ext = tf.constant([[[1.,1.,1.],[2.,2.,2.],[3.,3.,3.]],[[4.,4.,4.],[5.,5.,5.],[6.,6.,6.]]])

The TensorFlow constant is made up of floating point numbers.

We can see tf.constant and 1., 1., 1., 2., so on and so forth, all the way to 6, and it’s assigned to the Python variable constant_float_ext.


Now that we have created our TensorFlow constant, it’s time to run the computational graph.

sess = tf.Session()

So we launch the graph in the session.


Next, we initialize all of the global variables in the graph.

sess.run(tf.global_variables_initializer())


Rather than assigning the results to variables and then printing them, we'll just run each operation and print its result immediately.


First, let’s get the mean across all of the elements of the tensor.

print(sess.run(tf.reduce_mean(constant_float_ext)))

We get 3.5.


To check, we can do the following addition using decimal points to make sure we are using floating point numbers.

1.+2.+3.+4.+5.+6.

So 1. + 2. + 3. + 4. +5. + 6. is equal to 21.0.


We divide 21.0 by 6.0 to get 3.5.

21.0/6.0


Since each of these six values appears three times in the tensor, the full sum is 3 × 21.0 = 63.0, and dividing by the 18 elements gives the same 3.5, so this is equivalent to taking the mean of all the elements of our constant_float_ext tensor.
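As a quick sanity check outside TensorFlow, the same mean can be computed in plain Python (a sketch for verification only, not part of the computational graph):

```python
# Each of the values 1..6 appears three times in constant_float_ext.
values = [v for v in [1., 2., 3., 4., 5., 6.] for _ in range(3)]

total = sum(values)         # 63.0
mean = total / len(values)  # 63.0 / 18 = 3.5
print(mean)
```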


All right. Next, let’s do the reduce_mean functionality across different dimensions.


To figure out how many dimensions we can reduce across, let’s figure out the rank of our tensor.

print(sess.run(tf.rank(constant_float_ext)))

We use tf.rank.

We pass in our constant tensor, then we print the result.

We get 3.


So we can reduce across three dimensions.

However, remember that Python uses 0-based indexing, so the axis arguments we'll be using are 0, 1, and 2.
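One way to see which axis numbers are valid is to look at the number of dimensions of an array with the same shape; here is a small sketch using NumPy's analogous API (used for illustration, not the TensorFlow call from the video):

```python
import numpy as np

# Same shape as constant_float_ext: 2 blocks of 3 rows of 3 columns.
t = np.zeros((2, 3, 3))
print(t.ndim)  # rank 3, so the valid axis arguments are 0, 1, and 2
```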


First, we print our constant tensor so we can double check the results visually.

print(sess.run(constant_float_ext))


Next, let’s get the mean value of the tensor along the first dimension.

print(sess.run(tf.reduce_mean(constant_float_ext, 0)))

Again, we use 0 because axis numbering starts at 0, so axis 0 is the first dimension.

We can see that the result makes sense.

(1 + 4) / 2 = 2.5, the same across all three columns;

(2 + 5) / 2 = 3.5, the same across all three columns;

(3 + 6) / 2 = 4.5, the same across all three columns.

So that makes sense and it means that we were able to get the mean value of the tensor along the first dimension.
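The axis-0 reduction can be reproduced with NumPy's analogous mean function (a sketch using NumPy rather than the TensorFlow session from the video):

```python
import numpy as np

# Same values as constant_float_ext.
t = np.array([[[1., 1., 1.], [2., 2., 2.], [3., 3., 3.]],
              [[4., 4., 4.], [5., 5., 5.], [6., 6., 6.]]])

# Averaging along axis 0 pairs the two 3x3 blocks element-wise.
mean_axis0 = t.mean(axis=0)
print(mean_axis0)
# [[2.5 2.5 2.5]
#  [3.5 3.5 3.5]
#  [4.5 4.5 4.5]]
```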


Let’s reprint the constant_float_ext tensor so we can double check the results visually.

print(sess.run(constant_float_ext))


This time, we get the mean value of the tensor along the second dimension.

print(sess.run(tf.reduce_mean(constant_float_ext, 1)))

Again, we use the number 1 here because axis numbering starts at 0, so axis 1 is the second dimension, and we can see the results make sense.

(1 + 2 + 3) / 3 = 2, the same for the other two columns.

(4 + 5 + 6) / 3 = 5, the same for the other two columns.

So that makes sense and we were able to get the mean value of the tensor along the second dimension.
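Here the axis-1 reduction averages the three rows within each block; this can also be checked with NumPy's analogous mean function (a verification sketch, not the TensorFlow session itself):

```python
import numpy as np

# Same values as constant_float_ext.
t = np.array([[[1., 1., 1.], [2., 2., 2.], [3., 3., 3.]],
              [[4., 4., 4.], [5., 5., 5.], [6., 6., 6.]]])

# Averaging along axis 1 collapses the three rows inside each block.
mean_axis1 = t.mean(axis=1)
print(mean_axis1)
# [[2. 2. 2.]
#  [5. 5. 5.]]
```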


Let’s reprint our constant tensor so we can double check the results visually.

print(sess.run(constant_float_ext))


For the third and last dimension, we do the tf.reduce_mean.

print(sess.run(tf.reduce_mean(constant_float_ext, 2)))

We pass in our constant tensor, and we’re going to reduce across the third dimension.

Again, since axis numbering starts at 0, the third dimension is axis 2, and we can see the results make sense.

(1 + 1 + 1 )/ 3 = 1;

(2 + 2 + 2 )/ 3 = 2;

(3 + 3 + 3)/ 3 = 3;

(4 + 4 + 4)/ 3 = 4;

(5 + 5 + 5)/ 3 = 5;

(6 + 6 + 6)/ 3 = 6.

So that makes sense. We were able to get the mean value of the tensor along the third dimension.
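The axis-2 reduction averages within each innermost row, which is also easy to confirm with NumPy's analogous mean function (a verification sketch outside the TensorFlow session):

```python
import numpy as np

# Same values as constant_float_ext.
t = np.array([[[1., 1., 1.], [2., 2., 2.], [3., 3., 3.]],
              [[4., 4., 4.], [5., 5., 5.], [6., 6., 6.]]])

# Averaging along axis 2 collapses each innermost row of three values.
mean_axis2 = t.mean(axis=2)
print(mean_axis2)
# [[1. 2. 3.]
#  [4. 5. 6.]]
```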


Finally, we close the TensorFlow session to release all of the TensorFlow resources we used within the session.

sess.close()


That is how you can calculate the mean of tensor elements along various dimensions of the tensor by using the tf.reduce_mean operation.


