Willogy Insights

AI and Software Development enthusiasts. Knowledge is common; our insights and experience with it are unique.

Tensorflow insights - part 5: Custom model - continue

In the last part, we showed how to use a custom model to implement the VGG network. However, one remaining problem is that we cannot use model.summary() to see the output shape of each layer, nor can we get the shapes of the filters. Although we know how VGG is constructed, overcoming this problem helps end users who only have our checkpoint files to investigate the model. In particular, getting the output shape of each layer/block is very important when using the file test.py.
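As a minimal sketch (not the post's actual code, and with illustrative layer sizes), a Keras model only reports per-layer output shapes in `model.summary()` once it has been built with a concrete input shape, and the filter shapes can then be read from the layer weights:

```python
import tensorflow as tf

# A small stand-in for one VGG-style block.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(),
])

# Until the model is built, layer output shapes are unknown.
model.build(input_shape=(None, 224, 224, 3))
model.summary()  # now shows the output shape of each layer

# The filter (kernel) shape of the first conv layer:
conv = model.layers[0]
print(tuple(conv.kernel.shape))  # (3, 3, 3, 64)
```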

Link

Tensorflow insights - part 4: Custom model

In this post, we will use a Tensorflow custom model to implement the VGG architecture efficiently, so that we can easily experiment with many variants of the network. The deeper architecture will help us improve the final performance.

Link

Medical imaging analysis - part 1: A short survey

Medical imaging analysis is a distinctive branch of computer vision with its own standpoint in the domain. In this post, we introduce some rudimentary knowledge about it and give a brief overview of the recent context in the research community.

Link

Tensorflow insights - part 3: Visualizations

Deep learning is often thought of as a black-box function. Though we cannot fully understand what a deep network does to produce a prediction, we still have visualization techniques that reveal which factors affect its prediction.

Link

Artificial Intelligence and the COVID-19 Pandemic

Applying AI in healthcare is becoming increasingly popular, particularly during the COVID-19 pandemic. It helps anti-epidemic forces diagnose and classify cases and assess severity and the risk of death in the community.

Link

Tensorflow insights - part 2: Basic techniques to improve the performance of a neural network

In the previous post, we talked about the core components of Tensorflow for training a model and how to use a dataset. In this post, we continue working on that dataset and show some basic techniques to improve the performance of a neural network. The new code will be added directly on top of the previous code.

Link

Tensorflow insights - part 1: Image classification from zero to a trained model

When we start a machine learning project, the first mandatory question is where to get data from and how to prepare it. Only when this stage is complete can we move on to the training stage. In this tutorial, we first introduce how to use and prepare the [Stanford Dogs dataset](http://vision.stanford.edu/aditya86/ImageNetDogs/); then we show the 5 core steps to implement a neural network for training in Tensorflow. _You can find the code version of this post [here](https://github.com/willogy-team/insights--tensorflow/tree/main/part1)._

Link

Tensorflow - part 4: Graph in Tensorflow

Eager execution and graph execution are the two types of execution in Tensorflow. Eager execution is easier to use, but graph execution is faster.
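A minimal sketch of the difference (the function and values are illustrative, not from the post): wrapping a Python function in `tf.function` traces it into a TensorFlow graph, while a plain function runs eagerly, op by op, and both give the same result:

```python
import tensorflow as tf

def square_sum_eager(x):
    return tf.reduce_sum(x * x)  # runs eagerly, op by op

@tf.function  # traced into a TensorFlow graph on the first call
def square_sum_graph(x):
    return tf.reduce_sum(x * x)

x = tf.constant([1.0, 2.0, 3.0])
print(float(square_sum_eager(x)))  # 14.0
print(float(square_sum_graph(x)))  # 14.0 -- same result, via graph execution
```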

Link

Tensorflow - part 3: Automatic differentiation

Automatic differentiation is very handy for running backpropagation when training neural networks.
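As a minimal illustration (not taken from the post), `tf.GradientTape` records operations on a `tf.Variable` so the gradient can be computed automatically, which is exactly what backpropagation needs:

```python
import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x  # y = x^2, recorded on the tape

dy_dx = tape.gradient(y, x)  # dy/dx = 2x = 6.0
print(float(dy_dx))
```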

Link

Tensorflow - part 2: Ragged tensors and tf.variable

In this post, we will talk about ragged tensors and `tf.Variable`.
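A minimal sketch of the two concepts (the values are illustrative):

```python
import tensorflow as tf

# A ragged tensor holds rows of different lengths.
rt = tf.ragged.constant([[1, 2, 3], [4], [5, 6]])
print(rt.row_lengths().numpy())  # [3 1 2]

# A tf.Variable is a mutable tensor, e.g. for trainable weights.
v = tf.Variable([1.0, 2.0])
v.assign_add([0.5, 0.5])  # in-place update
print(v.numpy())  # [1.5 2.5]
```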

Link

AI and Neuroscience - Part 3: How can deep learning help neuroscience

From part 2 of this series, we know that the neuroscience of the human brain is an inspiration behind deep learning. In this post, we take the topic in the opposite direction: how deep learning can support research in neuroscience. As a pleasant side effect, you will also pick up the basics of deep learning in a concise and accessible way.

Link

Tensorflow - part 1: Creating and manipulating tensors (continued)

In the previous part, we showed how to create a tensor, convert a tensor to a numpy array, some attributes of a tensor, math operations, concatenating and stacking, and reshaping. Now, we will tell a little more about what we can do with tensors.
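As a quick reminder of those earlier operations (shapes here are illustrative, not from the post):

```python
import tensorflow as tf

a = tf.ones((2, 3))
b = tf.zeros((2, 3))

concatenated = tf.concat([a, b], axis=0)  # joins along an existing axis
stacked = tf.stack([a, b], axis=0)        # adds a new axis
reshaped = tf.reshape(a, (3, 2))          # same elements, new shape

print(concatenated.shape)  # (4, 3)
print(stacked.shape)       # (2, 2, 3)
print(reshaped.shape)      # (3, 2)
```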

Link

AI and Neuroscience - Part 2: Research in AI can absorb knowledge from neuroscience

In this second part of "AI and neuroscience", we will discuss some aspects of the human brain on which deep learning (DL) has already been based and, more importantly, some that have not yet been adapted to DL. This post is mostly based on the paper [1].

Link

Tensorflow - part 1: Creating and manipulating tensors

When learning and working with machine learning, we have to get on well with tensors. In this tutorial, we will show some ways to create and manipulate tensors in Tensorflow.
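A few basic examples of what the post covers (the values are illustrative):

```python
import tensorflow as tf

a = tf.constant([[1, 2], [3, 4]])    # create a tensor from a Python list
b = tf.zeros((2, 2), dtype=a.dtype)  # create a tensor by shape
c = a + b                            # element-wise math

print(c.shape, c.dtype)
arr = c.numpy()  # convert the tensor to a NumPy array
print(arr)
```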

Link

Reconstructing 3D model from images: a novice experience

Reconstructing a 3D model from 2D images is a very hard task. It requires knowing many algorithms, a good deal of 3D knowledge, and matrix computations to code a project from scratch. Luckily, with available open-source projects, we can lay down the burden of coding.

Link

Self-Supervised Learning - Part 3: The idea of Amdim and comparison with two other contrastive learning approaches

The idea of AMDIM, compared with two other contrastive learning approaches: CPC and Deep InfoMax.

Link

AI and Neuroscience - Part 1: Their relation

Artificial intelligence and neuroscience have a very close relationship. Knowledge from neuroscience can be utilized to improve AI, and the reverse is also true.

Link

Explanation in AI and Social Sciences

Knowing what an AI model will do, and why it does it, is very important for researchers to evaluate and improve that model. There is even a research domain for this: Explainable Artificial Intelligence (XAI). This field helps address current AI problems of ethical concern and a lack of credibility among users. To this end, the AI field should draw on knowledge from philosophy, psychology, and cognitive science about how humans define and evaluate explanations. The content of this post is based on [1].

Link

Self-Supervised Learning - Part 2: From Entropy to Augmented Multiscale Deep InfoMax

DeepInfoMax and AMDIM are two self-supervised models that have been very popular recently. They are built on the idea of the InfoMax principle. Therefore, to fully understand these two models, we must first know the underlying basis of the InfoMax principle, which includes entropy, mutual information, and their properties and relations.

Link

When will AI exceed human performance?

In the previous post, we discussed some difficulties in AI. But what do scientists in this domain really think about AI? If you want to know, read on. This post is a brief summary of the survey "Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts" [1].

Link

Self-Supervised Learning - Part 1: Simple and intuitive introduction for beginners

> _In his speech at AAAI 2020, Yann LeCun described self-supervised learning as "The machine predicts any parts of its input for any observed part"._

Link

4 Misconceptions in AI Research community

In recent years, AI has appeared a lot in the media. Is it really as miraculous as people say, or just hype? Let's shed some light on this question by exploring the difficulties of AI development described in the paper "Why AI is harder than we think".

Link