HackCV

  • A Gentle Introduction to the Challenge of Training Deep Learning Neural Network Models

    Better Deep Learning • 2019/02/15 02:00

    Deep learning neural networks learn a mapping function from inputs to outputs. This is achieved by updating the weights of the network in response to the errors the model makes on the training dataset. Updates are made to continually reduce this error until either a good enough model is found or the learning process gets […]

    The post A Gentle Introduction to the Challenge of Training Deep Learning Neural Network Models appeared first on Machine Learning Mastery.
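    The update loop the excerpt describes can be sketched with a toy example: repeatedly adjusting a single weight to reduce squared error on a tiny training set. The data, learning rate, and step count below are illustrative assumptions, not from the post.

```python
# Toy version of the training loop: fit y = w * x by gradient descent,
# updating the weight in response to the error on a tiny "training set".

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # true mapping is y = 2x

def mean_squared_error(w):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def gradient(w):
    # d/dw of the mean squared error
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

w = 0.0             # small initial weight, as in real networks
learning_rate = 0.05
for step in range(200):
    w -= learning_rate * gradient(w)

print(round(w, 3))  # converges close to 2.0
```

    Real networks have millions of weights and a non-convex error surface, which is exactly why training them is the hard optimization problem the post discusses.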

  • How to Control Neural Network Model Capacity With Nodes and Layers

    Better Deep Learning • 2019/02/13 02:00

    The capacity of a deep learning neural network model controls the scope of the types of mapping functions that it is able to learn. A model with too little capacity cannot learn the training dataset, meaning it will underfit, whereas a model with too much capacity may memorize the training dataset, meaning it will overfit […]

    The post How to Control Neural Network Model Capacity With Nodes and Layers appeared first on Machine Learning Mastery.
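    One concrete way to see how nodes and layers control capacity is to count trainable parameters. The helper below (my own illustration, not code from the post) counts the weights and biases of a fully connected network; adding nodes or layers grows the count, and with it the scope of functions the model can fit.

```python
def mlp_param_count(n_inputs, hidden_layers, n_outputs):
    """Trainable parameters (weights + biases) of a fully connected network.

    hidden_layers is a list of node counts, e.g. [10, 10] for two
    hidden layers of 10 nodes each.
    """
    sizes = [n_inputs] + list(hidden_layers) + [n_outputs]
    total = 0
    for fan_in, fan_out in zip(sizes, sizes[1:]):
        total += fan_in * fan_out + fan_out  # weight matrix + bias vector
    return total

# More nodes or more layers -> more parameters -> more capacity.
print(mlp_param_count(2, [5], 1))      # 2*5+5 + 5*1+1 = 21
print(mlp_param_count(2, [5, 5], 1))   # adds 5*5+5 = 30 more -> 51
```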

  • Framework for Better Deep Learning

    Better Deep Learning • 2019/02/11 02:00

    Modern deep learning libraries such as Keras allow you to define and start fitting a wide range of neural network models in minutes with just a few lines of code. Nevertheless, it is still challenging to configure a neural network to get good performance on a new predictive modeling problem. The challenge of getting good […]

    The post Framework for Better Deep Learning appeared first on Machine Learning Mastery.

  • How to Improve Performance With Transfer Learning for Deep Learning Neural Networks

    Better Deep Learning • 2019/02/08 02:00

    An interesting benefit of deep learning neural networks is that they can be reused on related problems. Transfer learning is a technique in which a model developed for one predictive modeling problem is reused, partly or wholly, on a different but related problem to accelerate training and improve the performance of a model on the problem […]

    The post How to Improve Performance With Transfer Learning for Deep Learning Neural Networks appeared first on Machine Learning Mastery.
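    The mechanics of reusing a model "partly or wholly" can be sketched schematically. Below, a "model" is just a dict of layer weights; the reused layers are copied from a trained source model and optionally frozen, while the output layer is replaced for the new task. The layer names and weight values are stand-ins, not a real framework.

```python
import copy

# Hypothetical source model, already trained on the original problem.
source_model = {
    "hidden1": [0.4, -0.2, 0.7],
    "hidden2": [0.1, 0.9, -0.5],
    "output":  [0.3, -0.8],
}

def transfer(source, reuse_layers, fresh_output):
    """Reuse selected layers of a trained model on a related problem."""
    target = {name: copy.deepcopy(source[name]) for name in reuse_layers}
    target["output"] = fresh_output   # the new task gets a new output layer
    frozen = set(reuse_layers)        # optionally keep reused weights fixed
    return target, frozen

target_model, frozen = transfer(source_model, ["hidden1", "hidden2"], [0.0, 0.0])
print(sorted(frozen))  # the reused layers start from learned weights
```

    Whether to freeze the reused layers or fine-tune them along with the new output layer is the main design choice in practice.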

  • How to Avoid Exploding Gradients in Neural Networks With Gradient Clipping

    Better Deep Learning • 2019/02/06 02:00

    Training a neural network can become unstable given the choice of error function, learning rate, or even the scale of the target variable. Large updates to weights during training can cause a numerical overflow or underflow, often referred to as “exploding gradients.” The problem of exploding gradients is more common with recurrent neural networks, such […]

    The post How to Avoid Exploding Gradients in Neural Networks With Gradient Clipping appeared first on Machine Learning Mastery.
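    Gradient clipping by norm, the remedy the post covers, can be written in a few lines: if the gradient vector's L2 norm exceeds a threshold, rescale the whole vector so the norm equals the threshold, preserving its direction. This is a minimal sketch; the example gradient and threshold are illustrative.

```python
import math

def clip_by_norm(gradients, max_norm):
    """Rescale a gradient vector so its L2 norm never exceeds max_norm."""
    norm = math.sqrt(sum(g * g for g in gradients))
    if norm > max_norm:
        scale = max_norm / norm
        return [g * scale for g in gradients]
    return gradients

exploding = [300.0, -400.0]          # norm = 500, far too large an update
print(clip_by_norm(exploding, 1.0))  # direction kept, norm rescaled to 1.0
```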

  • How to Improve Neural Network Stability and Modeling Performance With Data Scaling

    Better Deep Learning • 2019/02/04 02:00

    Deep learning neural networks learn how to map inputs to outputs from examples in a training dataset. The weights of the model are initialized to small random values and updated via an optimization algorithm in response to estimates of error on the training dataset. Given the use of small weights in the model and the […]

    The post How to Improve Neural Network Stability and Modeling Performance With Data Scaling appeared first on Machine Learning Mastery.
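    The two scalings most often used for this are normalization into a fixed range and standardization to zero mean and unit variance. Minimal versions of both are below (the example target values are illustrative); in practice the scaling statistics are computed on the training set only and reused for test data.

```python
def standardize(values):
    """Scale values to zero mean and unit variance (z-scores)."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / n
    return [(v - mean) / variance ** 0.5 for v in values]

def min_max_scale(values, lo=0.0, hi=1.0):
    """Scale values into the range [lo, hi]."""
    v_min, v_max = min(values), max(values)
    return [lo + (v - v_min) * (hi - lo) / (v_max - v_min) for v in values]

raw_targets = [100.0, 200.0, 300.0, 400.0]
print(min_max_scale(raw_targets))  # values mapped into [0, 1]
```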

  • How to Develop Deep Learning Neural Networks With Greedy Layer-Wise Pretraining

    Better Deep Learning • 2019/02/01 02:00

    Training deep neural networks was traditionally challenging as the vanishing gradient problem meant that weights in layers close to the input layer were not updated in response to errors calculated on the training dataset. An innovation and important milestone in the field of deep learning was greedy layer-wise pretraining that allowed very deep neural networks to […]

    The post How to Develop Deep Learning Neural Networks With Greedy Layer-Wise Pretraining appeared first on Machine Learning Mastery.
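    The "greedy" loop has a simple shape: add one layer at a time, train only the newly added layer while earlier layers stay frozen, and repeat until the network is deep enough. The sketch below shows only that control flow; the `Layer` class and `train_model` stub are stand-ins for a real framework and real fitting.

```python
class Layer:
    """Stand-in for a network layer; tracks frozen/trained state only."""
    def __init__(self, name):
        self.name = name
        self.trainable = True
        self.trained = False

def train_model(layers):
    # Stand-in for fitting: only trainable layers get their weights updated.
    for layer in layers:
        if layer.trainable:
            layer.trained = True

model = []
for i in range(3):                  # grow the network three layers deep
    for layer in model:
        layer.trainable = False     # freeze everything trained so far
    model.append(Layer(f"hidden{i + 1}"))
    train_model(model)              # fit only the new top layer

print([(l.name, l.trained) for l in model])
```

    After the final stage, all layers are typically unfrozen for a fine-tuning pass over the whole network.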

  • How to Choose Loss Functions When Training Deep Learning Neural Networks

    Better Deep Learning • 2019/01/30 02:00

    Deep learning neural networks are trained using the stochastic gradient descent optimization algorithm. As part of the optimization algorithm, the error for the current state of the model must be estimated repeatedly. This requires the choice of an error function, conventionally called a loss function, that can be used to estimate the loss of the […]

    The post How to Choose Loss Functions When Training Deep Learning Neural Networks appeared first on Machine Learning Mastery.
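    The usual pairing is mean squared error for regression targets and cross-entropy for classification targets. Minimal versions of both are below (the sample predictions are illustrative); the `eps` guard is a common trick to avoid `log(0)`.

```python
import math

def mean_squared_error(y_true, y_pred):
    """Typical loss for regression problems."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Typical loss for two-class problems with a sigmoid output."""
    return -sum(
        t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
        for t, p in zip(y_true, y_pred)
    ) / len(y_true)

print(mean_squared_error([1.0, 2.0], [1.5, 1.5]))          # 0.25
print(round(binary_cross_entropy([1, 0], [0.9, 0.1]), 4))  # small: confident
```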

  • Loss and Loss Functions for Training Deep Learning Neural Networks

    Better Deep Learning • 2019/01/28 02:00

    Neural networks are trained using stochastic gradient descent and require that you choose a loss function when designing and configuring your model. There are many loss functions to choose from and it can be challenging to know what to choose, or even what a loss function is and the role it plays when training a […]

    The post Loss and Loss Functions for Training Deep Learning Neural Networks appeared first on Machine Learning Mastery.

  • Understand the Impact of Learning Rate on Model Performance With Deep Learning Neural Networks

    Better Deep Learning • 2019/01/25 02:00

    Deep learning neural networks are trained using the stochastic gradient descent optimization algorithm. The learning rate is a hyperparameter that controls how much to change the model in response to the estimated error each time the model weights are updated. Choosing the learning rate is challenging as a value too small may result in a […]

    The post Understand the Impact of Learning Rate on Model Performance With Deep Learning Neural Networks appeared first on Machine Learning Mastery.
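    The trade-off the excerpt raises is easy to see on a toy objective: minimizing f(w) = w² by gradient descent, a too-small rate barely moves the weight, a moderate rate converges quickly, and a too-large rate overshoots and diverges. The rates and step count here are illustrative assumptions.

```python
def descend(learning_rate, steps=20, w=1.0):
    """Run gradient descent on f(w) = w^2 and return the final weight."""
    for _ in range(steps):
        w -= learning_rate * 2 * w  # gradient of w^2 is 2w
    return w

print(abs(descend(0.01)))  # still far from 0: learning too slow
print(abs(descend(0.4)))   # essentially 0: converged
print(abs(descend(1.1)))   # growing each step: diverged
```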

Copyright © HackCV, All rights reserved.