In every machine learning or deep learning problem, the goal is to estimate the result as close to reality as possible. This estimation involves a great deal of computation, design, and optimization. One of the key optimizations is hyperparameter tuning: it not only improves the accuracy of results but also helps define the architecture of the model. As we have seen in this article, there is a considerable range of automated, semi-automated, and manual techniques available for tuning hyperparameters, as well as heuristic processes led by data scientists. It is often said that, at current computational speeds, humans learn faster than machines, so it can make sense for a human to fine-tune manually. However, as the size of the data grows and the problem becomes more complex, it helps to know the tools and techniques that can automate the process. Several libraries provide these as ready-made functions, but it is worthwhile to understand the tuning knobs and their effects. This article was a small attempt towards that endeavour.
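As a concrete illustration of the "ready-made functions" mentioned above, the sketch below uses scikit-learn's `GridSearchCV` (one such library, chosen here as an assumption since the text does not name one) to automate the tuning of two knobs of a support vector classifier.

```python
# A minimal sketch of automated hyperparameter tuning, assuming scikit-learn
# is available; the dataset and parameter grid are illustrative choices.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# The "tuning knobs": regularization strength C and kernel choice.
param_grid = {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}

# Exhaustively evaluate every combination with 5-fold cross-validation.
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X, y)

print(search.best_params_)  # the best combination found
print(search.best_score_)   # its cross-validated accuracy
```

The same `fit`/`best_params_` interface carries over to randomized and Bayesian search tools, so knowing which knobs exist matters more than which search strategy drives them.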