# tensorflow
Tensorflow insights - part 6: Custom model - Inception V3
The VGG block boils down to a sub-network containing a sequence of convolutional layers followed by a max-pooling layer. Each layer is connected right after the previous one in a consecutive manner, exactly like all the networks we used before part 4. For that reason, you might not have seen the full advantage of using the Tensorflow custom layer/model. In this post, we will get familiar with the idea of parallel paths and implement the Inception module, which is used by the variants of the Inception network. To be practical, we will then show you how to implement the Inception-v3 network architecture. Throughout this post, you will see much more of the power of the Tensorflow custom layer/model.
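As a first taste of the parallel-path idea, here is a minimal sketch of an Inception-style module written as a custom Keras layer. The four-branch layout and the filter counts are illustrative choices of ours, not the exact configuration built in the post (the real module also places 1x1 reductions before the larger convolutions):

```python
import tensorflow as tf

class InceptionModule(tf.keras.layers.Layer):
    """A simplified Inception-style module: four parallel paths over one input."""
    def __init__(self, f1, f3, f5, fpool, **kwargs):
        super().__init__(**kwargs)
        self.conv1 = tf.keras.layers.Conv2D(f1, 1, padding='same', activation='relu')
        self.conv3 = tf.keras.layers.Conv2D(f3, 3, padding='same', activation='relu')
        self.conv5 = tf.keras.layers.Conv2D(f5, 5, padding='same', activation='relu')
        self.pool = tf.keras.layers.MaxPooling2D(3, strides=1, padding='same')
        self.pool_proj = tf.keras.layers.Conv2D(fpool, 1, padding='same', activation='relu')
        self.concat = tf.keras.layers.Concatenate()

    def call(self, inputs):
        # Every path sees the same input; the results are joined on the channel axis.
        return self.concat([
            self.conv1(inputs),
            self.conv3(inputs),
            self.conv5(inputs),
            self.pool_proj(self.pool(inputs)),
        ])

x = tf.random.normal((1, 32, 32, 3))
print(InceptionModule(16, 24, 8, 8)(x).shape)  # (1, 32, 32, 56)
```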
Tensorflow insights - part 5: Custom model - VGG - continue
In the last part, we showed how to use the custom model to implement the VGG network. However, one problem remained: we could not use model.summary() to see the output shape of each layer, nor could we get the shape of the filters. Although we know how the VGG is constructed, overcoming this problem helps end users - those who only have our checkpoint files - investigate the model. In particular, it is very important for us to get the output shape of each layer/block when using the file test.py.
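One common workaround for this (sketched here with our own helper name `build_graph`, not necessarily the solution used in the post) is to trace the subclassed model through a symbolic `tf.keras.Input`, so that `model.summary()` can report per-layer output shapes:

```python
import tensorflow as tf

class TinyBlock(tf.keras.Model):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.conv = tf.keras.layers.Conv2D(8, 3, activation='relu')
        self.pool = tf.keras.layers.MaxPooling2D()

    def call(self, inputs):
        return self.pool(self.conv(inputs))

    def build_graph(self, input_shape):
        # Wrap the subclassed model in a functional model so that
        # summary() can report the output shape of every layer.
        x = tf.keras.Input(shape=input_shape)
        return tf.keras.Model(inputs=x, outputs=self.call(x))

TinyBlock().build_graph((64, 64, 3)).summary()
```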
Tensorflow insights - part 4: Custom model - VGG
In this post, we will use the Tensorflow custom model to efficiently implement the VGG architecture so we can easily experiment with many variants of the network. The deeper network architecture will help us increase the final performance.
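To illustrate why a custom model makes the variants easy to build, here is a minimal sketch of a parameterized VGG block; the layer sizes here are our examples, not the post's exact code:

```python
import tensorflow as tf

class VGGBlock(tf.keras.layers.Layer):
    """n_convs convolutional layers followed by one max-pooling layer."""
    def __init__(self, n_convs, n_filters, **kwargs):
        super().__init__(**kwargs)
        self.convs = [
            tf.keras.layers.Conv2D(n_filters, 3, padding='same', activation='relu')
            for _ in range(n_convs)
        ]
        self.pool = tf.keras.layers.MaxPooling2D(2, strides=2)

    def call(self, inputs):
        x = inputs
        for conv in self.convs:
            x = conv(x)
        return self.pool(x)

# Varying (n_convs, n_filters) per block is what makes the VGG variants cheap to try.
x = tf.random.normal((1, 224, 224, 3))
print(VGGBlock(2, 64)(x).shape)  # (1, 112, 112, 64)
```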
Tensorflow insights - part 3: Visualizations
Deep learning is often thought of as a black-box function. Though we cannot fully understand what a deep network does to give a prediction, we still have some visualization techniques to obtain information about which factors affect its prediction.
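As one simple illustration of such a technique (chosen by us, not necessarily one of the techniques covered in the post), the intermediate activations of a network can be read out with a small probe model:

```python
import tensorflow as tf

# Any functional Keras model works here; VGG16 is just an example.
model = tf.keras.applications.VGG16(weights=None, input_shape=(224, 224, 3))

# Build a model that maps the input to the output of one convolutional layer.
probe = tf.keras.Model(inputs=model.input,
                       outputs=model.get_layer('block1_conv1').output)

activations = probe(tf.random.normal((1, 224, 224, 3)))
print(activations.shape)  # (1, 224, 224, 64) -- one channel per learned filter
```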
Tensorflow insights - part 2: Basic techniques to improve the performance of a neural network
In the previous post, we talked about the core components of Tensorflow for training a model and how to use a dataset. In this post, we will continue to work on that dataset and show some basic techniques to improve the performance of a neural network. The new code will be added directly on top of the code from the previous post.
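The post's concrete techniques are its own; purely to show the flavor of such a change, here is dropout added to a toy model (the technique, layer sizes, and rate are our illustrative choices, not taken from the post):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu', input_shape=(784,)),
    tf.keras.layers.Dropout(0.5),  # randomly zero half the activations during training
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.summary()
```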
Tensorflow insights - part 1: Image classification from zero to a trained model
When we start a machine learning project, the first mandatory question is where we get the data from and how the data is prepared. Only when this stage has been completed can we go to the training stage. In this tutorial, we first introduce how to use and prepare the [Stanford Dogs dataset](http://vision.stanford.edu/aditya86/ImageNetDogs/); then we show the 5 core steps to implement a neural network for training in Tensorflow. _You can reach the code version of this post [here](https://github.com/willogy-team/insights--tensorflow/tree/main/part1)._
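As a flavor of the data-preparation stage, here is a minimal sketch of loading an image-folder dataset; the directory path is a placeholder, and the post itself walks through the real preparation of the Stanford Dogs data:

```python
import tensorflow as tf

# Placeholder path: assumes the images have been extracted into
# one sub-directory per class (here, per dog breed).
train_ds = tf.keras.utils.image_dataset_from_directory(
    'data/stanford_dogs/train',
    image_size=(224, 224),
    batch_size=32,
)

for images, labels in train_ds.take(1):
    print(images.shape, labels.shape)  # (32, 224, 224, 3) (32,)
```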
Tensorflow - part 4: Graph in Tensorflow
Eager execution and graph execution are the two types of execution in Tensorflow. Eager execution is easier to use, but graph execution is faster.
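A minimal sketch of the difference: the same Python function can be run eagerly, or compiled into a graph with `tf.function`:

```python
import tensorflow as tf

def square_sum(x):
    return tf.reduce_sum(x * x)

# Same function, compiled into a graph: traced once, then reused on later calls.
square_sum_graph = tf.function(square_sum)

x = tf.random.normal((1000,))
print(square_sum(x).numpy())        # eager: runs op by op
print(square_sum_graph(x).numpy())  # graph: faster on repeated calls
```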
Tensorflow - part 3: Automatic differentiation
Automatic differentiation is very handy for running backpropagation when training neural networks.
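For example, `tf.GradientTape` records the operations applied to a variable so the gradient can be computed afterwards; a minimal sketch:

```python
import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x ** 2 + 2.0 * x   # y = x^2 + 2x

# dy/dx = 2x + 2, which is 8 at x = 3
print(tape.gradient(y, x).numpy())  # 8.0
```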
Tensorflow - part 2: Ragged tensors and tf.Variable
In this post, we will talk about ragged tensors and ```tf.Variable```.
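A minimal sketch of the two objects (the example values are ours):

```python
import tensorflow as tf

# A ragged tensor holds rows of different lengths.
ragged = tf.ragged.constant([[1, 2, 3], [4], [5, 6]])
print(ragged.row_lengths().numpy())  # [3 1 2]

# A tf.Variable is a mutable tensor, e.g. for trainable weights.
v = tf.Variable([1.0, 2.0])
v.assign_add([0.5, 0.5])
print(v.numpy())  # [1.5 2.5]
```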
Tensorflow - part 1: Creating and manipulating tensors (continue)
In the previous part, we showed how to create a tensor, convert a tensor to a numpy array, access some attributes of a tensor, perform math operations, and concatenate, stack, and reshape tensors. Now, we will tell a little more about what we can do with tensors.
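A quick sketch of the part-1 operations recapped above (the example values are ours):

```python
import tensorflow as tf

a = tf.constant([[1, 2], [3, 4]])
b = tf.constant([[5, 6], [7, 8]])

print(tf.concat([a, b], axis=0).shape)  # (4, 2): joined along existing rows
print(tf.stack([a, b]).shape)           # (2, 2, 2): new leading axis
print(tf.reshape(a, (4,)).numpy())      # [1 2 3 4]
```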
Tensorflow - part 1: Creating and manipulating tensors
When learning and working with machine learning, we have to get on well with tensors. In this tutorial, we will show some of the ways to create and manipulate tensors in Tensorflow.
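For a first taste, a minimal sketch of creating and inspecting a tensor (the example values are ours):

```python
import tensorflow as tf

t = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # create a tensor
print(t.shape, t.dtype)                    # attributes: (2, 2) <dtype: 'float32'>
print(t.numpy())                           # convert to a numpy array
print((t + 1.0).numpy())                   # element-wise math
```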