All about deep learning

Deep learning is a form of machine learning that models patterns in data as complex, multilayered networks.

Deep learning can not only provide useful results where other methods fail, it can also build more accurate models and reduce the time needed to build a useful model. However, training deep learning models requires a great deal of computing power. Another disadvantage of deep learning is that its models are difficult to interpret.

The defining feature of deep learning is that the model being trained has more than one hidden layer between input and output. In most discussions, deep learning means using deep neural networks. There are, however, a few algorithms that implement deep learning using other kinds of hidden layers besides neural networks.
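Since this post touches on PyTorch, here is a minimal sketch of what "more than one hidden layer" looks like in code. The layer sizes (784 → 128 → 64 → 10) are illustrative assumptions, e.g. for classifying flattened 28x28 digit images:

```python
import torch
import torch.nn as nn

# A minimal "deep" network: two hidden layers between input and output.
model = nn.Sequential(
    nn.Linear(784, 128),  # input -> first hidden layer
    nn.ReLU(),
    nn.Linear(128, 64),   # first -> second hidden layer
    nn.ReLU(),
    nn.Linear(64, 10),    # second hidden layer -> output (10 classes)
)

x = torch.randn(32, 784)  # a batch of 32 flattened 28x28 "images"
logits = model(x)
print(logits.shape)       # torch.Size([32, 10])
```

With only one hidden layer this would be a classic shallow network; stacking additional hidden layers is what makes it "deep."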

Deep learning versus machine learning
To conform with common usage, I will refer to shallow (non-deep) machine learning as classical machine learning.

In general, classical machine learning algorithms train much faster than deep learning algorithms; one or more CPUs are usually enough to train a classical model. Deep learning models often need hardware accelerators such as GPUs, TPUs, or FPGAs, both for training and for deployment at scale. Without them, training the models could take months.
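In PyTorch, targeting an accelerator is explicit: you pick a device and move the model and data onto it. A minimal sketch (the tensor sizes are arbitrary; the same code runs on CPU when no GPU is present):

```python
import torch

# Pick the fastest available device: a CUDA GPU if present, otherwise the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Model parameters and input data must live on the same device.
weights = torch.randn(1000, 1000, device=device)
inputs = torch.randn(64, 1000).to(device)

outputs = inputs @ weights.T  # runs on the GPU if available, else on the CPU
print(outputs.device.type)
```

The same pattern applies to whole models via `model.to(device)`, which is why accelerator support is largely a one-line change in most PyTorch training scripts.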

For many problems, some classical machine learning algorithm will produce a "good enough" model. For other problems, classical machine learning algorithms have not worked very well in the past.

Deep learning applications
There are many examples of problems that currently require deep learning to produce the best models. Natural language processing (NLP) is a good one.

In the fall of 2016, the quality of Google Translate's output for the English-French, English-Chinese, and English-Japanese language pairs suddenly improved dramatically, from producing word salad to producing sentences close to the quality of a professional human translation. What happened behind the scenes is that the Google Brain and Google Translate teams revamped Google Translate, moving from its old phrase-based statistical machine translation algorithms (a type of classical machine learning) to a deep neural network trained on word sequences, built with Google's TensorFlow framework.

It was not an easy project. It took many doctoral-level researchers months of work on the models, and thousands of GPU-weeks to train them. It also spurred Google to create a new kind of chip, the tensor processing unit (TPU), to run neural networks at scale for Google Translate.

Another good example of a deep learning application is image classification.
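An image classifier is usually built from convolutional layers rather than the fully connected layers shown earlier. A small illustrative sketch in PyTorch, assuming 3-channel 32x32 inputs and 10 classes (CIFAR-10-like; all sizes are assumptions, not a recommended architecture):

```python
import torch
import torch.nn as nn

# A tiny convolutional classifier for 3-channel 32x32 images.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # 3x32x32 -> 16x32x32
    nn.ReLU(),
    nn.MaxPool2d(2),                              # -> 16x16x16
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # -> 32x16x16
    nn.ReLU(),
    nn.MaxPool2d(2),                              # -> 32x8x8
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                    # one score per class
)

images = torch.randn(8, 3, 32, 32)  # a batch of 8 random "images"
scores = model(images)
print(scores.shape)                 # torch.Size([8, 10])
```

In a real classifier, the index of the largest score per row (`scores.argmax(dim=1)`) is the predicted class; production models simply stack many more such convolutional blocks.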
