Comparing Eager Execution and Graph Execution Using Code Examples, Understanding When to Use Each and Why TensorFlow Switched to Eager Execution | Deep Learning with TensorFlow 2.x

This is Part 4 of the Deep Learning with TensorFlow 2.x Series, and we will compare two execution options available in TensorFlow:
Eager Execution vs. Graph Execution
You may not have noticed that you can actually choose between these two. Well, the reason is that TensorFlow sets eager execution as the default option and does not bother you unless you are looking for trouble😀. But this was not the case in TensorFlow 1.x versions. Let’s see what eager execution is and why TensorFlow made a major shift from graph execution with TensorFlow 2.0.
Eager Execution
Eager execution is a powerful execution environment that evaluates operations immediately. It does not build graphs, and the operations return actual values instead of computational graphs to run later. With Eager execution, TensorFlow calculates the values of tensors as they occur in your code.
Eager execution simplifies the model building experience in TensorFlow, and you can see the result of a TensorFlow operation instantly. Since the eager execution is intuitive and easy to test, it is an excellent option for beginners. Not only is debugging easier with eager execution, but it also reduces the need for repetitive boilerplate codes. Eager execution is also a flexible option for research and experimentation. It provides:
- An intuitive interface with natural Python code and data structures;
- Easier debugging with calling operations directly to inspect and test models;
- Natural control flow with Python, instead of graph control flow; and
- Support for GPU & TPU acceleration.
In eager execution, TensorFlow operations are executed by the native Python environment with one operation after another. This is what makes eager execution (i) easy-to-debug, (ii) intuitive, (iii) easy-to-prototype, and (iv) beginner-friendly. For these reasons, the TensorFlow team adopted eager execution as the default option with TensorFlow 2.0. But, more on that in the next sections…
Let’s take a look at the Graph Execution.
Graph Execution
We covered how useful and beneficial eager execution is in the previous section, but there is a catch:
Eager execution is slower than graph execution!

Since eager execution runs all operations one-by-one in Python, it cannot take advantage of potential acceleration opportunities. Graph execution extracts tensor computations from Python and builds an efficient graph before evaluation. Graphs, or tf.Graph objects, are special data structures containing tf.Operation and tf.Tensor objects. While tf.Operation objects represent computational units, tf.Tensor objects represent data units. Graphs can be saved, run, and restored without the original Python code, which provides extra flexibility for cross-platform applications. With a graph, you can take advantage of your model in mobile, embedded, and backend environments where Python is unavailable. In a later stage of this series, we will see that trained models are saved as graphs no matter which execution option you choose.
Graphs are also easy to optimize. They allow compiler-level transformations such as statically inferring tensor values through constant folding, distributing sub-parts of a computation between threads and devices (an advanced level of distribution), and simplifying arithmetic operations. TensorFlow’s graph optimizer, Grappler, performs these optimizations. In graph execution, evaluation of the operations happens only after the entire graph has been built. So, in summary, graph execution is:
- Very fast;
- Very flexible;
- Runs in parallel, even at the sub-operation level; and
- Very efficient on multiple devices, with GPU & TPU acceleration capability.
Therefore, despite being difficult-to-learn, difficult-to-test, and non-intuitive, graph execution is ideal for large model training. For small model training, beginners, and average developers, eager execution is better suited.
Well, considering that eager execution is easy to build and test, and graph execution is efficient and fast, you would want to build with eager execution and run with graph execution, right? Well, we will get to that…
Looking for the best of two worlds? A fast but easy-to-build option? Keep reading 🙂
Before we dive into the code examples, let’s discuss why TensorFlow switched from graph execution to eager execution in TensorFlow 2.0.
Why Did TensorFlow Adopt Eager Execution?
Before version 2.0, TensorFlow prioritized graph execution because it was fast, efficient, and flexible. The difficulty of implementation was simply a trade-off that seasoned programmers accepted. On the other hand, PyTorch took a different approach and prioritized dynamic computation graphs, a concept similar to eager execution. Although dynamic computation graphs are not as efficient as TensorFlow’s graph execution, they provided an easy and intuitive interface for the new wave of researchers and AI programmers. This difference in the default execution strategy made PyTorch more attractive to newcomers. Soon enough, PyTorch, although a latecomer, started to catch up with TensorFlow.

After seeing PyTorch’s increasing popularity, the TensorFlow team soon realized that they had to prioritize eager execution. Therefore, they adopted eager execution as the default execution method and made graph execution optional. This mirrors PyTorch, which sets dynamic computation graphs as the default execution method and lets you opt into static computation graphs for efficiency.
Now that both TensorFlow and PyTorch have adopted beginner-friendly execution methods, PyTorch has lost its competitive advantage among beginners. Currently, due to its maturity, TensorFlow has the upper hand. However, there is no doubt that PyTorch is also a good alternative for building and training deep learning models. The choice is yours…
Code with Eager, Execute with Graph
In this section, we will compare the eager execution with the graph execution using basic code examples. For the sake of simplicity, we will deliberately avoid building complex models. But, in the upcoming parts of this series, we can also compare these execution methods using more complex models.
We have mentioned that TensorFlow prioritizes eager execution. But that’s not all. Now, you can actually build models just as in eager execution and then run them with graph execution. TensorFlow 1.x required users to create graphs manually and then compile them by passing a set of output and input tensors to a session.run() call. With TensorFlow 2.0, graph building and session calls are reduced to an implementation detail. This simplification is achieved by replacing session.run() with the tf.function() decorator. In TensorFlow 2.0, you can decorate a Python function with tf.function() to run it as a single graph object. With this new method, you can easily build models and still gain all the benefits of graph execution.
Code Examples
This post will test eager and graph execution with a few basic examples and a full dummy model. Please note that since this is an introductory post, we will not dive deep into a full benchmark analysis for now.
Basic Examples
We will start with two initial imports:
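A minimal sketch of these two imports might look like this:

```python
import tensorflow as tf  # TensorFlow 2.x, with eager execution enabled by default
import timeit            # standard-library module for timing small code snippets
```
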
timeit is a Python module that provides a simple way to time small bits of Python code, and it will be useful for comparing the performance of eager execution and graph execution.
To run code with eager execution, we don’t have to do anything special; we create a function, pass a tf.Tensor object, and run the code. In the code below, we create a function called eager_function to calculate the square of Tensor values. Then, we create a tf.Tensor object and finally call the function we created. Our code is executed with eager execution:
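A sketch of this function and call (the exact snippet is an assumption reconstructed from the surrounding text and the printed output):

```python
import tensorflow as tf

def eager_function(x):
    result = x ** 2
    # Under eager execution, this print shows the actual tensor values
    print(result)
    return result

x = tf.constant([1., 2., 3., 4., 5.])
result = eager_function(x)
```
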
Output: tf.Tensor([ 1. 4. 9. 16. 25.], shape=(5,), dtype=float32)
Now let’s see how we can run the same function with graph execution.
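One way to do this (a sketch; eager_function is the hypothetical function from above, redefined here so the snippet stands alone) is to wrap the function with tf.function():

```python
import tensorflow as tf

def eager_function(x):
    result = x ** 2
    # Inside a tf.function, this print runs once during graph tracing and shows
    # a symbolic tensor (e.g. Tensor("pow:0", ...)) instead of concrete values
    print(result)
    return result

# Wrapping the function compiles it into a graph on its first call
graph_function = tf.function(eager_function)

x = tf.constant([1., 2., 3., 4., 5.])
output = graph_function(x)
```
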
Output: Tensor("pow:0", shape=(5,), dtype=float32)
By wrapping our eager_function with tf.function(), we can run our code with graph execution. We can compare the execution times of these two methods with timeit as shown below:
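A sketch of that comparison (the repeat count of 1 is an assumption; a single call highlights the one-time graph-building cost):

```python
import tensorflow as tf
import timeit

def eager_function(x):
    print(x ** 2)
    return x ** 2

graph_function = tf.function(eager_function)
x = tf.constant([1., 2., 3., 4., 5.])

# With number=1, graph execution pays the one-time cost of tracing the graph
eager_time = timeit.timeit(lambda: eager_function(x), number=1)
graph_time = timeit.timeit(lambda: graph_function(x), number=1)
print("Eager time:", eager_time, "Graph time:", graph_time)
```
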
Output: Eager time: 0.0008830739998302306 Graph time: 0.0012101310003345134
As you can see, graph execution took more time. But why? Well, for simple operations, graph execution does not perform well because it has to spend initial computing power to build the graph. The power of graph execution shows in complex calculations. If we run the code 100 times (by changing the number parameter), the results change dramatically, mainly due to the print statement in this example:
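With number=100, the comparison might look like this (a sketch; note that inside the graph-compiled function the Python print runs only during tracing, not on every call, which accounts for much of the gap):

```python
import tensorflow as tf
import timeit

def eager_function(x):
    print(x ** 2)  # executes on every eager call, but only once during graph tracing
    return x ** 2

graph_function = tf.function(eager_function)
x = tf.constant([1., 2., 3., 4., 5.])

eager_time = timeit.timeit(lambda: eager_function(x), number=100)
graph_time = timeit.timeit(lambda: graph_function(x), number=100)
print("Eager time:", eager_time, "Graph time:", graph_time)
```
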
Output: Eager time: 0.06957343100020807 Graph time: 0.02631650599960267
Full Model Test
Now that we have covered the basic code examples, let’s build a dummy neural network to compare the performances of eager and graph execution. We will:
1 — Make the TensorFlow imports to use the required modules;
2 — Build a basic feedforward neural network;
3 — Create a random Input object;
4 — Run the model with eager execution;
5 — Wrap the model with tf.function() to run it with graph execution.
If you are new to TensorFlow, don’t worry about how we are building the model. We will cover this in detail in the upcoming parts of this Series.
The following lines do all of these operations:
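A sketch of these five steps (the layer sizes, input shape, and repeat count are illustrative assumptions, so your timings will differ from the ones reported below):

```python
import tensorflow as tf
import timeit

# 2 — a basic feedforward neural network (layer sizes are assumptions)
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10),
])

# 3 — a random input batch of 60 samples of shape 28x28
input_data = tf.random.uniform([60, 28, 28])

# 5 — the same model wrapped so that calls run with graph execution
graph_model = tf.function(model)

# 4 & 5 — time both execution modes (the repeat count is an assumption)
eager_time = timeit.timeit(lambda: model(input_data), number=1000)
graph_time = timeit.timeit(lambda: graph_model(input_data), number=1000)
print("Eager time:", eager_time, "Graph time:", graph_time)
```
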
Output: Eager time: 27.14511264399971 Graph time: 17.878579870000067
As you can see, graph execution outperformed eager execution by a margin of around 35%. In more complex model training operations, this margin is much larger.
Final Notes
In this post, we compared eager execution with graph execution. While eager execution is easy to use and intuitive, graph execution is faster, more flexible, and more robust. Therefore, using the default option, eager execution, is a no-brainer for beginners. However, if you are a seasoned programmer who wants to take advantage of the extra flexibility and speed, then graph execution is for you. And thanks to the latest improvements in TensorFlow, using graph execution is now much simpler, so you can push your limits and try it out. But make sure you know that debugging is also more difficult in graph execution.
The code examples above showed us that it is easy to apply graph execution for simple examples. For more complex models, there is some added workload that comes with graph execution.
Note that when you wrap your model with tf.function(), you cannot use several model methods such as model.compile() and model.fit(), because they already try to build a graph automatically. But we will cover those examples in a different, more advanced post of this series.
Congratulations
We have successfully compared Eager Execution with Graph Execution.
Give yourself a pat on the back!
This should give you a lot of confidence since you are now much more informed about Eager Execution, Graph Execution, and the pros and cons of using these execution methods.