I would like to begin by thanking you for your interest and for taking the time to read this rather long article. I hope that after going through the pointers and lessons highlighted here (which are, in effect, my experiences so far), you will feel motivated and excited enough to consider artificial intelligence/machine learning/data science as a career. I hope this brings about a positive change in your life.
I am sure the words "artificial intelligence" (AI) fascinate many people. If someone says they are a practitioner in this area, or that they have created algorithms/models that can make predictions with a certain level of accuracy, self-learn, auto-correct, and recognize people, it in turn fascinates many around them.
Artificial intelligence focuses on the theory and development of computer systems and programs/algorithms that can perform tasks normally done by humans (tasks that earlier required human intelligence/input). A few examples are visual perception, speech recognition, decision-making, and language translation.
Today, organizations and individuals (including me) are experimenting a lot with AI technologies and data science: building predictive models on datasets to forecast stock prices, weather patterns, or crop yields, and developing neural networks, face recognition technologies, self-aware/self-driving cars, and improved disease-diagnosis processes. It is happening all around us, across almost all industry verticals, at a swift pace. Many believe that with the onset of AI technologies, many jobs will be lost, unemployment rates will soar, artificially intelligent machines will take over, and people will become redundant. In my view, this is probably a myth. In fact, I feel that AI and other new-age technologies will create newer jobs, giving individuals an opportunity to grow and do their jobs better than before. It could also mean breaking the monotony of old ways of working and doing a job in an altogether new way.
Someone rightly said – “Change and evolution are the only constant things in the world”.
One should contemplate the following question in the case of an unfortunate situation of a job loss due to AI technologies/automation –
"Is it AI that has taken my job, or is it my unwillingness to adapt and change with time that is responsible for this?"
Long story short: the key is to recognize that change is happening around us, to adapt to it as much as possible, and to go with the flow. We have evolved from early primates into human beings, and it is only natural to evolve further into something else. We should remember (and it is good to be a little paranoid here) that a human being, and an employee too, can become extinct if they cannot adapt over time to the changing environment and conditions around them. With all this change, and the need for humans to change with it, comes the next important question (restricting myself to technology and the topic of AI):
Coming from a traditional technology background –
- What should individuals do to adapt themselves to be better prepared for this change?
- How can an individual from a traditional technology/business background change, enhance their learning, and maybe one day call themselves an AI practitioner (even developing algorithms and neural-network designs/models of their own), rather than becoming paranoid about AI technologies and the job losses they might cause?
Through this article, I aim to share how I have adapted to this change. I can summarize my AI journey so far in 7 steps. These 7 steps have helped me immensely in beginning my AI journey and have motivated me to continue on it, so I am taking the initiative to pen them down to help other beginners chart their own self-learning paths.
I originally did not have these steps in mind for this journey – they happened to me (over the years) as I embarked on my learning process.
The instinct to "feel overwhelmed"
Every individual appreciates a little bit of attention at one time or another. With the advent of social networking, sharing information and getting attention has become a lot easier. This is also one of the primary reasons social networking sites clock phenomenal double-digit growth. Using social networks, individuals highlight their achievements, successes, agonies, and knowledge. By highlighting all this, they aim to get that extra attention from their peers, friends, family, and well-wishers.
Coming back to the topic of artificial intelligence: to highlight their knowledge, understanding, or achievements on a topic (in the professional world or otherwise), individuals sometimes use (perhaps unintentionally) difficult terms and jargon to explain it. Some people will follow what is being said to the dot. At the same time (and I know this for sure), many will not follow anything at all. It is exactly at this point (and it is only human) that those who are not following become overwhelmed and shut themselves out when they see or hear people using these esoteric terms. In the context of data science and AI, unsupervised/supervised learning, neural networks, backpropagation, decision trees, self-learning, reinforcement learning, deep learning, feed-forward mechanisms, etc., are some of the terms that fit this description. Most of us try to skirt these topics and, under normal circumstances, tune out when we hear people discussing them.
Here comes the first and foremost point to keep in mind. Many of us (including me) are used to being overwhelmed at the mere mention of the terms above. They initially sound so complicated (and highbrow) that the natural human instinct takes over and says:
"This is too tough for me" OR "I need advanced studies, maybe a Doctorate or a Masters, to even understand this" OR "This requires extensive mathematics, statistics, etc." (true perhaps in some cases, but generally speaking it is manageable, something I will elaborate on as we go forward).
Lesson 1 – STOP being overwhelmed
AI has been in existence since the 1950s; it is just now that people have started talking about it so much in the mainstream and in daily life. Remember the movie "The Terminator" (1984), where Arnold Schwarzenegger played a cyborg, a combination of human and machine: a lethal killing machine so powerful that it could recognize its subjects, self-learn, plan, and even decide to kill humans to avoid a probable future conflict with them. We have experienced AI, and studied the basic elements that constitute its foundations, at school and college all along, one way or the other. For example, when we started linear algebra in grades 7 to 9, studied matrices/determinants, statistics, and areas under the curve from grade 10 onwards, studied vectors/scalars and differentiation/integration from grade 11 onwards, and went on to probability, regression, transforms, state machines, etc. in our college degrees, we were already touching on parts of machine learning, data science, and AI. If you have done all this and passed with flying colors, why get overwhelmed NOW?!
Take the first step by going through the exercise below to revise your key probability concepts.
Exercise – Probability Overview (Key Concepts Review)
Probability theory provides a framework for reasoning about likelihood of events.
- Experiment: procedure that yields one of a possible set of outcomes e.g. repeatedly tossing a die or coin
- Sample Space, S: set of possible outcomes of an experiment e.g. if tossing a die, S = {1, 2, 3, 4, 5, 6}
- Event, E: set of outcomes of an experiment e.g. event that a roll is 5, or the event that sum of 2 rolls is 7
- Probability of an Outcome s, or P(s): a number that satisfies 2 properties
- for each outcome s, 0 <= P(s) <= 1
- ∑ P(s) = 1, summing over every outcome s in S
- Probability of an Event, P(E): the sum of the probabilities of the outcomes that make up the event:
- P(E) = ∑_{s ∈ E} P(s)
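These definitions are easy to check in code. Below is a minimal Python sketch (the fair-die probabilities and the example event are my own illustrative choices, not from any library) that verifies the two properties and computes the probability of an event:

```python
from fractions import Fraction

# Sample space for one roll of a fair die
S = [1, 2, 3, 4, 5, 6]
P = {s: Fraction(1, 6) for s in S}  # each outcome equally likely

# Property 1: every P(s) lies between 0 and 1
assert all(0 <= p <= 1 for p in P.values())
# Property 2: the probabilities sum to 1
assert sum(P.values()) == 1

# Probability of an event = sum over its outcomes,
# e.g. E = "the roll is even"
E = {2, 4, 6}
p_E = sum(P[s] for s in E)
print(p_E)  # 1/2
```

Using exact fractions (rather than floats) keeps the sums exact, so the "sum to 1" check holds without rounding worries.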
Liking it? So far, just remember: this is all doable.
Algorithm Output – AI In Finance - Prediction Strategy For Stock Price Movement Against Market Returns
What is AI and what should I keep in mind as a beginner in this field?
In the simplest terms, AI is a branch of science, and its real beauty lies in the simplicity with which things are done within it. The branch exists to make an artificially intelligent machine perform (through automation and self-learning) repeatable tasks that a human being was performing (tasks therefore prone to human error). The machine performs these tasks to improve accuracy and minimize errors, and re-learns from its actions to perform even better and excel.
As an example: from our childhood until now, whatever we have seen and done, from learning to recognize letters, digits, words, sentences, and grammar, to raising our hands, moving our legs, or thinking (the amazing thing being that we learned all this naturally, without even realizing it, and perform it effortlessly), is in one way or another an example of the intelligence AI tries to replicate. Today's machines are capable of performing much of this, and maybe more.
Lesson 2 - Be receptive to things happening around you by asking how’s and why’s
To do well in the field of data science and AI technologies, as a beginner, be sensitive to what is happening around you and constantly train your mind to ask questions.
- Why is this happening?
- How is this happening?
- When you see a bird, an airplane, a car, or a ship, how many of you have asked yourselves: how am I able to recognize all of these? What logic is the human body (eyes and brain, in this case) using to differentiate between them?
So, it is these constant “How’s and Why’s” to oneself that will come in handy to keep you excited and to begin your journey.
Algorithm Output – AI In Language Translation
Self-learn using the internet and get enrolled in an online course
These days, with advancements in technology, data costs have fallen and the internet is becoming available to almost every individual on the planet, so it is very easy to source information. There are many blog posts and websites dedicated solely to machine learning, data science, AI, deep learning, etc. One can go through all this information on the internet and self-learn a great deal. Many renowned universities and institutions have published research papers across various subjects on the internet. They have also started offering online courses on the same topics that are taught in person at the universities.
The online mode has completely transformed the way learning happens these days. Through it, acquiring knowledge on a topic has become extremely easy, convenient, and cost-effective. Until a few years ago, courses on artificial intelligence, deep learning, and data science (offered by universities as in-person courses on campus) were extremely expensive and probably beyond the reach of many of us. Today, through the online mode, the same courses and content are offered at discounted rates, making education and knowledge affordable to almost everyone. Add to that the convenience of learning something new from one's home, in familiar surroundings.
The only thing that is needed at an individual level is to have that craving to learn a new topic, identify the relevant course material for that topic, and then dedicate some time to learn from the internet or get enrolled in a course to acquire knowledge.
Lesson 3 – Get enrolled to self-learn
Sites like Coursera, Google, Amazon, etc., have some amazing information freely available on this subject, and they offer online courses (at heavily discounted rates) from some of the top institutions worldwide. An individual just needs to enroll in a particular course and self-learn.
Some of the good sites for crash courses:
- Google's Machine Learning Crash Course: (developers.google.com/machine-learning/crash-course/)
- Introduction to Statistical Learning: (www-bcf.usc.edu/~gareth/ISL/)
AI – A use case-driven industry
Not everything can (literally speaking) be AI-(tized), if that is even a word. This is a very use-case-driven industry, and it does not make sense to AI-tize anything and everything around you. Remember: what comes so naturally to a human being and the human body (for example, understanding speech or differentiating a bird from a plane) takes a humongous amount of time, effort, and, naturally, cost to replicate in an algorithm or a piece of code. So do your research and identify only those use cases that make sense to AI-tize (and give real value), not everything under the sun.
For example, an algorithm or a neural network that is designed to perform a specific task will perform only that task with precision. It adapts to its surroundings, learns from them, and does the job day in and day out without tiring, improving its accuracy and efficiency over time through self-learning. As designers of AI models, we humans have to be very clear and precise in first identifying the use case, defining our requirements of the model, and knowing in advance the outputs expected from it. In the absence of this, it is entirely possible for AI models to give erroneous, undesired results and go haywire, not to mention the escalating costs and expectation mismatches.
Lesson 4 – Choose your use case wisely
AI is a very use-case-driven industry. We need to be clear, right at the beginning, about the objective our models are required to achieve, and then plan backward to create the algorithms/models/products.
Exercise – Modelling Taxonomy Review
There are many different types of models. It is important to understand the trade-offs and when to use a certain type of model.
Parametric vs. Nonparametric:
Parametric: models that first make an assumption about the functional form, or shape, of f (e.g. linear) and then fit the model. This reduces estimating f to estimating a set of parameters, but if the assumption is wrong, it will lead to bad results.
Non-Parametric: models that make no assumptions about f, which allows them to fit a wider range of shapes but may lead to overfitting.
Supervised vs. Unsupervised
Supervised: models that fit input variables x = (x1, x2, ..., xn) to known output variables y = (y1, y2, ..., yn)
Unsupervised: models that take in input variables x = (x1, x2, ..., xn) but have no associated output to supervise the training. The goal is to understand relationships between the variables or observations.
Blackbox vs. Descriptive
Blackbox: models that make decisions, but we do not know what happens "under the hood" e.g. deep learning, neural networks
Descriptive: models that provide insight into why they make their decisions e.g. linear regression, decision trees
First-Principle vs. Data-Driven
First-Principle: models based on a prior belief of how the system under investigation works, incorporates domain knowledge (ad-hoc)
Data-Driven: models based on observed correlations between input and output variables
Deterministic vs. Stochastic
Deterministic: models that produce a single "prediction" e.g. yes or no, true or false
Stochastic: models that produce probability distributions over possible events
Flat vs. Hierarchical
Flat: models that solve problems on a single level, no notion of sub-problems
Hierarchical: models that solve several different nested sub-problems.
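To make the parametric vs. nonparametric trade-off concrete, here is a small Python sketch (using NumPy and a made-up noisy quadratic dataset; all names and values are my own illustrative choices). A straight-line fit assumes a shape for f and misses the curve, while a 1-nearest-neighbour predictor assumes nothing about f and reproduces the training data exactly, hinting at overfitting:

```python
import numpy as np

# Toy data: a noisy quadratic (hypothetical example)
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = x**2 + 0.05 * rng.standard_normal(20)

# Parametric: assume f is linear; estimating f reduces to
# estimating two parameters (slope, intercept)
slope, intercept = np.polyfit(x, y, deg=1)
linear_pred = slope * x + intercept

# Nonparametric: 1-nearest-neighbour, no assumption on the shape of f
def knn_predict(x_query, x_train, y_train):
    return y_train[np.abs(x_train - x_query).argmin()]

knn_pred = np.array([knn_predict(q, x, y) for q in x])

# The linear model is biased (wrong shape assumed);
# 1-NN fits the training data exactly (zero training error)
print("linear training MSE:", np.mean((linear_pred - y) ** 2))
print("1-NN   training MSE:", np.mean((knn_pred - y) ** 2))
```

Zero training error for the nonparametric model is not a victory: on unseen points it would reproduce the noise too, which is exactly the overfitting risk the taxonomy above warns about.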
Algorithm Output – Autonomous/Driver Assistance Systems
What do I need to do to start my journey in this field?
Next comes the most important question: I come from a traditional technology background; what should I do to get into this field?
Academically speaking, and in my opinion, our education system needs a revamp. We are taught more theory than practice, and that needs to undergo a radical shift. To all students and their guardians: we need to cultivate a culture of "Why this?", "How do I do this?" and "How do I apply this?" instead of "Learn this", "Their grades are better", and "No marks = no good jobs".
We need to move away from all this.
Lesson 5 – Brush up forgotten skills/pre-requisites to kick-start your AI journey
- As a pre-requisite, brush up on your fundamentals and concepts of mathematics (especially vectors, scalars, matrices, determinants, linear algebra, etc.) and statistics (p-values, theorems concerning areas under the curve, etc.) before you start your journey.
- Pick up a programming language (it could be anything: C++, Java, Python, etc.) and make a conscious attempt to start coding. When you code, you can see things in action: AI models producing results in front of you, predicting values, differentiating between objects, or recognizing individuals, up to more advanced behaviors like self-learning and correction. It is a completely different feeling when you see an algorithm produce an expected output, learn, and differentiate.
- Since AI is a very use-case-driven industry, it is important to identify an industry or a process and acquire all the domain knowledge you can about the one you want to experiment with. An example could be the supply chain. It is a vast process comprising several sub-processes. So first try to identify one of the sub-processes, plot it from start to finish, and understand its nuances and the problems the industry faces that you wish to eliminate with your AI models.
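As a small taste of the brushing-up above, the Python sketch below (NumPy, with arbitrary example values of my own choosing) exercises the vector, matrix, and determinant concepts from the first bullet:

```python
import numpy as np

# A vector and a matrix: the linear-algebra objects named above, in code
v = np.array([1.0, 2.0, 3.0])
A = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 1.0]])

dot = np.dot(v, v)        # dot product of a vector with itself: 1 + 4 + 9
Av = A @ v                # matrix-vector product
det = np.linalg.det(A)    # determinant of A

print("v . v =", dot)
print("A v   =", Av)
print("det A =", det)
```

Seeing these textbook operations run in a few lines is exactly the "see things in action" effect the second bullet describes, and a gentle way to start coding.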
Test what you have learned – apply the feedback loop to improve
Once you have experimented with all of the above, it is very important to know where you stand vis-a-vis other people working in the same area. The moment you see things you (or others) have developed working in front of you, learnings and ideas start flowing.
Lesson 6 – Test your knowledge
For this, I would recommend (and I continue to do this very often, even after years of being in the industry) participating in various hackathons and coding contests around AI, IoT, machine learning, analytics, NLP, deep learning, etc.
Most companies have the following objectives from hackathons:
- Scout for good talent
- Test out various ideas that they see as potential trends or what their customers are demanding
- Build on their objectives and prototypes through community learning and effort
For a candidate, the learning could be:
- Hackathon topics are an important source of information about general industry trends and what businesses are demanding
- Get noticed by top companies (if you perform well in a hackathon)
- Test your learnings against a large talent pool and build on them
So, in short – participate even if you don’t win or even if you know you don’t stand a chance to win…
Algorithm Output – AI Detecting Face Masks
Re-learn and don’t let your creative urge die
The last thing I would like to emphasize is: do not let your creative side die. The urge to take an alternate view, to be critical, and to suggest ways to improve and do things differently should not die because of your current line of work, or because you were taught to perform a job a certain way. It is very important to keep trying and experimenting, to see things from a different perspective, to hold an alternate viewpoint, to be critical of oneself, and to keep alive the urge to do things differently.
So, in a nutshell – the motto should be to
LEARN, TEST, APPLY, HAVE CREATIVE URGE INTACT AND RE-LEARN.
This article was also featured by NASSCOM at https://indiaai.gov.in/article/how-to-begin-your-ai-journey