Schedules

Event Schedule

We are in the process of finalizing the sessions. Expect more than 30 talks at the summit. Please check back on this page for updates.
Here is our schedule from last year.

  • Day 1 Bangalore

    January 22, 2020

  • At Freshworks, we are using machine learning to help support helpdesks resolve their customers' issues more effectively. To do this, we are building systems for automatically routing, prioritising, and categorising incoming support tickets. Key challenges on this journey include algorithm choice, model quality, the large number of models, maintaining model freshness, and system monitoring. I will be sharing the steps we took to address these challenges and launch our service.
    Main Hall - TECH TALKS

  • The idea is to spot people suffering from prolonged stress and to offer a long-term solution by predicting and analyzing their emotions using brainwaves recorded through the Neurosky Brainwave Headset. Under stress, the body produces larger quantities of the chemical cortisol, which triggers an increased heart rate, heightened muscle preparedness, sweating and alertness. Emotional stress is a primary factor in six of the leading causes of death. It is a feeling of emotional or physical tension that makes a person feel frustrated, angry and nervous. Stress can be positive when it helps us avoid risk or meet a deadline, but when it lasts for a long time it may ruin our health. To counter this, individuals recognized as showing a stress pattern are asked to listen to soft music, and the brainwave pattern is recorded in response. We use neural network architectures with an attention mechanism to identify the pattern and predict the emotional state of a person.
    HALL 2: KNOWLEDGE TALK

  • The last few years have witnessed Artificial Intelligence (AI) making quantum leaps into several domains such as healthcare, customer analytics, education, manufacturing, and so forth. With the changes in business strategy, personalized product recommendation has become an integral part of the customer experience journey. There has been extensive research leveraging collaborative filtering, market basket analysis and various other machine-learning / deep-learning based solutions for developing recommendation engines. Features such as customer behavior, historically purchased items, characteristics of similar users, etc. are used as inputs to the machine learning pipelines. In this paper, we present a novel approach using graph algorithms to build a product recommendation solution for a publishing company. The publishing company has a footprint, with its customized education content, software and services, at many world-renowned institutes and universities across the world. However, the company is experiencing a significant churn rate for its online learning platform, which is subscribed to by various institutes. The developed solution is an ensemble of insights generated from several graph algorithms for centrality (personalized PageRank, betweenness & degree) and community detection (Louvain, strongly connected components, etc.) to control churn, with product recommendation being the key step. The approach for generating recommendations for courses and books is based on the overall influential power of the various other institutes and associated instructors in the network. Simultaneously, the approach focuses on the popular books and courses inside a local community identified by the graph algorithms to generate recommendations (a brief code sketch follows this entry).
    HALL - 3 - CASE PRESENTATION
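    A minimal, illustrative sketch of the kind of graph signals named above (personalized PageRank, betweenness and degree centrality, community detection), built with networkx on a hypothetical institute/book toy graph. This is not the authors' ensemble: the Louvain / strongly-connected-components steps are stood in for by greedy modularity communities, and all node names and blend weights are made-up placeholders.

    ```python
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    G = nx.Graph()
    # hypothetical interactions: institutes connected to the books/courses they use
    G.add_edges_from([("inst_A", "book_1"), ("inst_A", "course_2"), ("inst_B", "book_1"),
                      ("inst_B", "book_3"), ("inst_C", "course_2"), ("inst_C", "book_3")])

    # centrality signals: personalized PageRank seeded on the institute we recommend for,
    # plus betweenness and degree as additional influence measures
    ppr = nx.pagerank(G, personalization={"inst_A": 1.0})
    btw = nx.betweenness_centrality(G)
    deg = dict(G.degree())

    # community signal: items popular inside inst_A's local community
    communities = list(greedy_modularity_communities(G))
    local = next(c for c in communities if "inst_A" in c)

    # rank unseen items with a simple (arbitrary) blend of the signals
    seen = set(G.neighbors("inst_A"))
    candidates = [n for n in G.nodes if n.startswith(("book", "course")) and n not in seen]
    ranked = sorted(candidates,
                    key=lambda n: ppr[n] + 0.1 * btw[n] + 0.01 * deg[n] + (0.05 if n in local else 0.0),
                    reverse=True)
    print(ranked)
    ```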

  • In the past decade the number of rail accidents has increased dramatically. According to a study conducted in 2016-2017, 74 percent of rail accidents are caused by a lack of alertness (drowsiness) or negligence of the train drivers. According to another study, 32,000 animals were killed on railway tracks between 2016 and 2018. Our paper gives a complete solution to the above-mentioned critical issues using computer vision technology. i) Driver drowsiness is detected using an IR-LED-based tracking approach fitted inside the engine cab, so that drowsiness can be detected during odd hours (midnight) as well. This method is based on a spatial-temporal relationship that detects drowsiness more accurately. If drowsiness is detected, a level I alarm is sounded. ii) Using a long-range camera placed on the train engine, images are continuously monitored via streaming with a well-trained object detection deep learning model (TensorFlow back-end) to detect obstacles on the track. Once an obstacle is detected, a level II alarm is sent to the driver. This automated system helps the driver be more productive by helping him act quickly in emergency situations, and in turn helps save lives. In addition to saving lives, this system also helps the Indian Railways in terms of monetary benefits by saving a lot of operational cost.
    HALL 2: KNOWLEDGE TALK

  • Computational Linguistics, or the study of Natural Language Processing, is the modern approach for a machine to understand and interpret language, including its grammar, semantics, phonetics, etc., by using large datasets and computational tools and techniques. Back in the 1990s, statistical machine learning methods began to replace the classical top-down rule-based approach to interpreting languages, primarily due to more accurate results, speed of processing and robustness of the algorithms. The advent of faster processors in the 2010s exponentially improved the performance of NLP algorithms. The need for clean and labeled training data also grew along with the emergence of faster processors and robust algorithms to improve the accuracy of the models. Statistical approaches have turned another corner and are now strongly focused on the use of deep neural networks, both to perform inference on specific linguistic tasks and to develop robust algorithms. This paper focuses on a machine learning approach to perform word segmentation and hierarchy detection on medical documents (in the form of editable digital documents such as PDFs). When text is extracted from digital PDFs using Python libraries such as PDFMiner, the output is in the form of scriptio continua - a style of writing without spaces between words or sentences. One way to use this unseparated raw text in a meaningful way is word segmentation, a process that determines the word boundaries in a sentence. In this paper, we present a machine learning approach to build a word segmentation algorithm and also find the hierarchical structure in the text (for example Header1 or Header2, etc.). We leverage the English Wikipedia dataset to build and train this advanced sequence-to-sequence model. The technique used in this paper has been successfully tested on medical domain data. We have also observed significant improvement in model accuracy by further training the algorithm with domain-specific documents (in the form of PDFs).
    HALL - 3 - CASE PRESENTATION

  • We are living in a world where every aspect of our life is shaped by technology. Through diligent research, new technologies are developed and then used to improve well-being in every aspect of one's life. In the case of natural disasters such as a flood, earthquake, hurricane or even a stampede, there is always a challenge in treating the victims on time, whether because of a shortage of medical staff at relief camps or because the victim is stuck at a remote location. The Global Call for Code challenge provides a platform for developers across the globe to leverage their coding skills in creating solutions along the theme of natural disaster preparedness and relief. Driven by the passion to "Code for a Cause", Pragun and Rachit, two young developers from IBM Software Labs, developed VirtualAid as a part of the Call for Code. VirtualAid is the first means of help for people affected during a natural disaster from medical staff who can't be physically present in the disaster-prone area. In the session, the presenters will demonstrate the capabilities of VirtualAid and discuss how the power of Watson APIs was leveraged in the development of the solution. The solution employs a programmed drone mounted with a Pi Camera and an object detection module implemented on the Raspberry Pi to differentiate between living things and non-living things. The Watson Visual Recognition service is used for image processing, and the victim's responses are processed through the Watson Speech to Text service, with sentiment analysis performed by Watson's NLP service. The developers will share their experience of developing VirtualAid and discuss key learnings along the way.
    Main Hall - TECH TALKS

  • Analyttica is a niche data science and advanced business analytics company focused on providing incremental business impact for its clients by developing custom, innovative solutions for them in the predictive and prescriptive analytics space. With the advent of newer technology with greater computing power, big data and contemporary media channels, organizations have realized the importance of marketing mix modelling (MMM) and identified the benefits it entails, giving rise to cost-saving opportunities and driving profitability. Marketers are under increasing pressure to move away from intuition-based budgeting decisions to factual budgeting decisions, substantiated through quantitative evidence. In an attempt to understand how their marketing activities connect with real movements in sales and market share, the client, an Australian subsidiary of one of the world's largest automotive manufacturers, wanted to conduct an MMM exercise and arrive at a clearer understanding of their Return on Marketing Investment. The client's marketing strategy was designed around the themes of Media, Messaging and Brand Equity. Various levers were identified under each of these themes, which are essentially the components of the marketing mix, to help assess the relative impact of marketing on enquiries and sales of the client's product. Using historical marketing and sales data under each of the exhaustive components identified, Analyttica helped estimate the relative influence of the various components of the marketing mix, while controlling for other sales drivers such as seasonality. Properly capturing the event and the longevity of effect of the media channels on the outcome (in this case unit sales) is paramount for the success of any MMM application, and is heavily dependent on the suitable selection of transformations such as decay (simple, logarithmic, exponential, etc.), logistic, AdStock, and gamma. The appropriate selection of these transformation functions is highly contextual and driven by the skills, experience, knowledge and judgment of the modeler. As a surrogate for the knowledge, skill and experience of the modeler, we developed a heuristic optimization methodology for selecting the transformation functions for the media channels (a small illustrative sketch of the AdStock transformation follows this entry). The methodology considered six transformation functions competing against each other to arrive at the most appropriate one that reduced the error of the model. This method is automated and has been developed into a prototype that can be applied in similar situations towards the process of measuring "Above the Line" (ATL) marketing effectiveness and optimization. The above exercise highlighted the potential costs and benefits of all the components, which were then weighed against one another in order to build a media mix solution, arrive at the effectiveness of each of the ATL channels, and enable decisions around objective allocation of the marketing budget across channels.
    HALL - 3 - CASE PRESENTATION
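    For readers unfamiliar with the transformations named above, here is a minimal illustrative sketch of a simple geometric AdStock transformation of a media spend series. The decay rate and spend values are arbitrary examples, not the client's figures, and this is only one of the six candidate transformations the abstract mentions.

    ```python
    def adstock(spend, decay=0.5):
        """Carry a share of previous periods' media effect over into the current period."""
        carried = 0.0
        out = []
        for x in spend:
            carried = x + decay * carried
            out.append(carried)
        return out

    weekly_tv_spend = [100, 0, 0, 50, 0]          # hypothetical weekly spend
    print(adstock(weekly_tv_spend, decay=0.5))     # [100.0, 50.0, 25.0, 62.5, 31.25]
    ```

    A heuristic search of the kind described in the abstract could then sweep decay values and alternative transformation functions, keeping whichever minimizes the model error.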

  • In these times of increasing complexity and competition, ensuring the right investments & sustenance of results is challenging. It becomes increasingly difficult when you are dealing with the variables of time, budget & sponsorship with stakeholders, to ensure we solve business problems through partnership and achieve the best results. At AB InBev, our journey at the Global Growth Analytics Centre started 3 years ago, to become trusted advisors and collaborators for our leaders in 2020. Our objective is to transform our core ways of working, fueled by data & technology, for the future. In this session we would like to share our experience of this journey in translating analytical insights into actions that drive meaningful results for the company.
    HALL 2: KNOWLEDGE TALK

  • Much has been said & done over the past decade on the first two streams of analytics, that is, Descriptive & Predictive Analytics. But while analytics have grown in terms of scope and scale, so too have the demands that businesses make of them. 2018-19 saw the rise of the next wave of modern analytics – prescriptive analytics, which is designed to find the best course of action for a given situation. Most companies are in the very early stages of leveraging Prescriptive Analytics. Prescriptive analytics have traditionally lacked three major facets that drive true business value: scale, speed, and application. That's where AI comes into play. Prescriptive models governed by AI can consider significantly more information, with much greater agility than their static human-guided counterparts, making their outputs more precise, relevant, and actionable. In this talk we will demystify Prescriptive Analytics & show you how ATH Precision – a 'Powerful Ecosystem for Business Integrated Analytics' – is being used by us to run Prescriptive AI for our clients.
    Main Hall - TECH TALKS

  • We live in the age of Siri and Alexa with machine learning algorithms set to help mankind make a quantum leap. In such a world, do you need humanness? This talk will touch upon humanness in the age of AI-ML and how it impacts careers in these future technologies. Springboard is a company built on humanness and this talk will throw light on how to become a better AI engineer while also helping the community of aspirants.
    Main Hall - TECH TALKS

  • With the growth of eCommerce, product discoverability is key to a good experience. However, brands often do not have all their products online, or have incomplete information. Adding images and writing product descriptions for millions of SKUs can be daunting, especially manually. Arpita Sur & Vinay Mony, AI experts from Ugam, a Merkle Company, will discuss how they apply AI to deliver a seamless eCommerce experience. They will deep dive into areas where Ugam's proprietary cognitive computing system, JARVIS, has been applied to deliver accurate results at scale: NLP for attribute extraction, NLG for automated product descriptions, and computer vision for product matching.
    HALL 2: KNOWLEDGE TALK

  • A product description on an e-tailer's website apprises a prospect of the product and its benefits. A persuasive description can lead to conversion, while a feeble one can steer a prospect towards competitors. A copywriter makes a compelling pitch by personalizing the description and tempting the prospect with sensory words, and provides crisp product benefits rather than making unproven claims. In this paper, I present an automated mechanism to assess the quality of product descriptions by scoring the usage of personal, sensorial, functional and superlative language in a description. This categorization is achieved through lexical, syntactic and contextual NLP techniques. The degree of personalization is measured using a constituency parser and synsets (WordNet). Sensorial context is confirmed using a sentence encoder. Functional categorization is based on Open Information Extraction & Coreference Resolution. Verification of claims is done using POS tagging and pertainyms (WordNet); a small WordNet lookup sketch follows this entry. The quality of a product description is presented as a normalized, weighted measurement of the degree of usage of personal, sensorial, functional and superlative language in the description.
    HALL 2: KNOWLEDGE TALK
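    A tiny sketch of the WordNet lookups mentioned above (synsets and pertainyms), using NLTK's WordNet interface. This is only the lexical building block, not the paper's scoring pipeline; the example words are arbitrary and the script assumes the WordNet corpus has been downloaded (nltk.download('wordnet')).

    ```python
    from nltk.corpus import wordnet as wn

    # synsets: candidate senses of a sensory word found in a description
    for syn in wn.synsets("silky"):
        print(syn.name(), "-", syn.definition())

    # pertainyms: relational adjectives point back to the concept they derive from
    for syn in wn.synsets("musical", pos=wn.ADJ):
        for lemma in syn.lemmas():
            print(lemma.name(), "->", [p.name() for p in lemma.pertainyms()])
    ```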

  • A model is an object that represents information. For example, a regression model uses a straight line to represent the relationship between two variables. This talk briefly addresses the question of whether models and model building are still important in statistics, machine learning, and related areas. Two awards from 2019 give useful contexts: the Turing Award (the "Nobel Prize in Computing") for deep learning, and the Sveriges Riksbank Prize (the "Nobel Prize in Economics") for field experiments. The winners tell us how data can be used to predict and to prescribe, all through the use of well-designed models.
    Main Hall - TECH TALKS

  • This talk will focus on how to create a successful recipe for building AI into the existing traditional workflows of an organization. Machine Learning, Deep Learning and Data Science have evolved extremely fast in the recent past, and practitioners find it difficult to keep pace with the technology. The talk will cover the recent advancements in AI technology from both an infrastructure and a software perspective, and recommend best practices learnt from multiple domains for setting up a successful AI practice.
    Main Hall - TECH TALKS

  • In the world of Artificial Intelligence, Machine Learning, etc., it is becoming simple for a business user to get the required insights and information, or to take action, in a complex enterprise environment. But it still requires some effort from users to achieve this while maintaining the required data protection of the underlying systems and keeping information at their fingertips. This is due to the multiple complexities of enterprise landscapes, which need some manual effort to achieve the required results. Even for testing such a cloud solution for the enterprise, we have to consider multiple combinations of complexity and deployment, which could use different authentication services (ADFS or SAML SSO, etc.), application servers, other 3rd-party software, or an on-premise solution to leverage the benefits of a hybrid environment. In other cases, one cloud application has to connect with solutions on different cloud platforms, regions, or applications to form a complex deployment, for which the effort of manual setup for validation after every release is significant. This can be addressed by automating these custom manual actions with an RPA (Robotic Process Automation) framework, using a machine-learning-enabled Natural Language Processing system to provide inputs to the system via voice, text, etc.
    HALL - 3 - CASE PRESENTATION

  • As we move into the future, global consumption and generation of data will increase drastically. By 2020, 13 billion megabytes of data will be generated every second. Along with the rapid velocity of data ingestion, the need for speedier decision making will simultaneously rise. Today's MLOps processes will become redundant as data scientists start searching for faster ways to create production-grade deployments with insightful user interfaces that communicate the power of the algorithms without getting stuck in long MLOps journeys. In this paper, we discuss our work centered on a new Python library called "Streamlit", which helps data scientists rapidly create production-grade visualizations with backend integration and quickly share their results with stakeholders to generate powerful insights. We have utilized Streamlit to create a web-based tool which runs on top of an optimization code, reducing our algorithm-development-to-front-end-deployment lead time from 3-4 weeks to 3-4 days. The Streamlit module provides a wrapper around Python scripts with integration of text, images, audio, video, data frames and interactive plots. It runs on a Streamlit server, which is quite like a Jupyter server, and can be hosted locally like Jupyter notebooks or remotely on servers (a minimal example follows this entry). In this paper, we will discuss the pros and cons of this module and possible areas of improvement.
    HALL 2: KNOWLEDGE TALK
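    A minimal sketch of the kind of Streamlit wrapper described above, using Streamlit's public API. This is not the authors' tool: the "optimization backend" is faked with a random convergence curve, and the widget names and values are placeholders.

    ```python
    # app.py - run with: streamlit run app.py
    import numpy as np
    import pandas as pd
    import streamlit as st

    st.title("Optimization run explorer (illustrative)")

    # interactive widget feeding a backend parameter
    iterations = st.slider("Iterations", min_value=10, max_value=500, value=100)

    # placeholder for "runs on top of an optimization code": a fake convergence
    # curve stands in for real solver output
    history = pd.DataFrame({"objective": np.minimum.accumulate(np.random.rand(iterations))})

    st.line_chart(history)            # interactive plot
    st.dataframe(history.tail())      # data frame display
    st.write(f"Best objective: {history['objective'].min():.4f}")
    ```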

  • To Unlock What's Next, you need to be armed with the right tools – tools that help you navigate a fast-transforming digital ecosystem. Advances in digital technology are adding vastly to the pool of usable data, powering modern research and measurement techniques that use Artificial Intelligence, Machine Learning, and similar methodologies. This session will give you a 360-degree view of how to uncover and discover the transformative tools required to address the business challenges of the future and make real headway by taking bolder, better, and faster decisions. We will discuss use cases covering how ML algorithms have been tuned to give the level of accuracy and flexibility needed for deployment at a global organization level. Our journey on this path has begun, and we will share the learnings of our transformation.
    Main Hall - TECH TALKS

  • The agenda of the talk is broken down into two parts (a short Keras model sketch follows this entry):
    1. Query Understanding [30 mins]: [Sonu Sharma]
      • NLP-based deep learning models for finding the intent of a query in a particular taxonomy/category: description and Jupyter notebook demonstration [20 mins]:
        o Multi-label/multi-class classification model from scratch in Keras
        o Feature engineering in Spark Scala and pandas
        o Keras Functional API details in TF 2.0
        o The "ImageNet moment" of NLP - the latest advances in word embeddings: ELMo and BERT
        o Understanding deep neural networks such as Bi-directional Long Short-Term Memory (BiLSTM) and character embeddings for language modeling
      • NLP-based deep learning models for query tagging with entities like brand, color, nutrition, product quantity, etc. using Named Entity Recognition [10 mins]:
        o Building a custom model with the TensorFlow Estimator API
        o Traditional word embeddings such as GloVe, fastText, etc.
        o Query (text) preprocessing
        o Sequence modeling using Conditional Random Fields (CRF)
        o Saving and restoring heavy models in TF using the SavedModel concept
    2. Related Searches [20 mins]: [Atul Agarwal]
      • NLP-based deep learning model for predicting the next search keyword - model description and Jupyter notebook demonstration [20 mins]:
        o Building a Sequence-to-Sequence (Seq2Seq) model using the Long Short-Term Memory (LSTM) concept of deep neural networks in Keras
        o Comparing different word embeddings, e.g. word2vec, fastText, GloVe, etc., in a popular AI framework such as gensim
        o Keras Sequential API details
        o Similarity search based on Facebook AI Research's FAISS
    HALL - 3 - CASE PRESENTATION
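    A sketch of a BiLSTM query-intent classifier built with the Keras Functional API (TF 2.x), in the spirit of part 1 of the agenda. This is not the presenters' model: vocabulary size, sequence length and the number of intent categories are placeholder values.

    ```python
    import tensorflow as tf
    from tensorflow.keras import layers, Model

    VOCAB_SIZE, MAX_LEN, EMBED_DIM, NUM_INTENTS = 20000, 20, 128, 30   # placeholders

    inputs = layers.Input(shape=(MAX_LEN,), dtype="int32", name="query_tokens")
    x = layers.Embedding(VOCAB_SIZE, EMBED_DIM, mask_zero=True)(inputs)
    x = layers.Bidirectional(layers.LSTM(64))(x)          # BiLSTM encoder
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(NUM_INTENTS, activation="softmax", name="intent")(x)

    model = Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()
    # model.fit(train_token_ids, train_intent_ids, validation_split=0.1, epochs=5)
    ```

    For the multi-label variant mentioned in the agenda, the final layer would typically switch to a sigmoid activation with binary cross-entropy loss.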

  • NLP is evolving like a ripple in the ocean of Machine Learning. Today most companies invest heavily in creating NLP models customized for their own needs, so scaling an NLP model to other teams is always a challenge: each time, a team has to analyse the data and write an NLP algorithm for that data. Solution: to address this problem, I am proposing a standard NLP framework that needs only the customer's data. Customers do not need to invest more in creating NLP models and algorithms; instead, they can simply submit their data to the NLP framework, which automatically learns the intent and develops an algorithm for that particular data.
    HALL - 3 - CASE PRESENTATION

  • Netflix's unique culture affords its data scientists an extraordinary amount of freedom. They are expected to build, deploy, and operate large machine learning workflows autonomously, without needing to be significantly experienced with systems or data engineering. Metaflow, our ML framework (now open-source at metaflow.org), provides them with delightful abstractions to manage their project's lifecycle end-to-end, leveraging the strengths of the cloud: elastic compute and high-throughput storage. In this talk, we present our human-centric design principles, which you can now adopt with ease (a minimal flow sketch follows this entry).
    Main Hall - TECH TALKS
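    A tiny Metaflow flow illustrating the lifecycle abstractions mentioned above (steps, branching, artifacts). This is a generic toy example, not a Netflix workload; the hyperparameter sweep and "metric" are placeholders.

    ```python
    # minimal_flow.py - run with: python minimal_flow.py run
    from metaflow import FlowSpec, step

    class TrainFlow(FlowSpec):

        @step
        def start(self):
            # artifacts assigned to self are versioned and persisted by Metaflow
            self.alphas = [0.1, 1.0, 10.0]
            self.next(self.train, foreach="alphas")

        @step
        def train(self):
            # each branch could train one model; a real flow might add
            # @resources / @batch decorators to run this in the cloud
            self.alpha = self.input
            self.score = 1.0 / self.alpha     # placeholder "metric"
            self.next(self.join)

        @step
        def join(self, inputs):
            self.best = max(inputs, key=lambda i: i.score).alpha
            self.next(self.end)

        @step
        def end(self):
            print("best alpha:", self.best)

    if __name__ == "__main__":
        TrainFlow()
    ```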

  • The most common question marketers ask now is: "Did my ad campaign cause the user to convert and generate more revenue for my brand, or would that have happened anyway?" Also, targeting randomly selected customers leaves them with huge costs and weak response. The complexity of the ad-tech ecosystem is constantly growing, with brands running marketing activities across multiple channels, new targeting capabilities, and formats. Because of this, traditional digital measurement metrics like cost per click, return on investment, cost per conversion, etc. just scratch the surface when measuring the impact of marketing strategies. This measurement gap leads us to look at incremental lift as a metric for measuring the impact of a marketing strategy. Incrementality testing is a mathematical approach to differentiate between correlation and causation. We formulated different approaches to calculate incremental lift that can be implemented in the digital marketing ecosystem. Viewability is one of the methodologies we use for calculating incrementality, in which we measure the effectiveness of an ad by comparing users who are exposed to the ad versus users who are not. Our methodologies cover test environment setup, randomization, bias handling, hypothesis testing, primary outputs and the different ways of using these outputs (a simple lift readout is sketched after this entry). We used this output for strategy planning and optimization, helping us achieve higher campaign efficiency. Having a set of different approaches to calculate incrementality gives us the flexibility to cater to a wide range of test cases with different setup challenges and restrictions.
    HALL 2: KNOWLEDGE TALK
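    An illustrative lift readout for an exposed vs. non-exposed split, assuming a simple randomized holdout and using a two-proportion z-test from statsmodels. The counts are made up, and the authors' own design (viewability-based exposure, bias handling, etc.) is richer than this sketch.

    ```python
    from statsmodels.stats.proportion import proportions_ztest

    exposed_conversions, exposed_users = 1_450, 100_000    # hypothetical counts
    holdout_conversions, holdout_users = 1_300, 100_000

    stat, p_value = proportions_ztest(
        count=[exposed_conversions, holdout_conversions],
        nobs=[exposed_users, holdout_users],
        alternative="larger",      # H1: exposed users convert more than the holdout
    )

    cr_exposed = exposed_conversions / exposed_users
    cr_holdout = holdout_conversions / holdout_users
    lift = (cr_exposed - cr_holdout) / cr_holdout          # incremental lift

    print(f"z={stat:.2f}, p={p_value:.4f}, incremental lift={lift:.1%}")
    ```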

  • One of the major challenges in educational NLP is assessing student responses to questions. Automatic short answer grading (ASAG) is the task of assessing short natural language responses to questions using computational methods. Short answers test a student's recall using natural language, unlike multiple-choice questions, which evaluate only recognition. However, there is no easy way to evaluate short answers, resulting in manual evaluation and feedback that puts an enormous burden on teachers in resource-constrained countries like India. In this paper, we present our research into this problem, combining conventional and advanced Natural Language Processing (NLP) methods, including embeddings at the word, sentence and contextual level. We also present a comprehensive evaluation of various embedding techniques (word2vec, FastText, ELMo, Skip-Thoughts, Quick-Thoughts, FLAIR embeddings, InferSent, Google's Universal Sentence Encoder and BERT) with respect to short-text similarity (a simple similarity baseline is sketched after this entry). Our method helps in evaluating natural-language short answers, giving instant feedback to students. We also incorporate Explainable AI (XAI) insights into the logic behind scoring an answer, thus providing feedback and improving our models.
    HALL - 3 - CASE PRESENTATION
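    A deliberately simple short-answer similarity baseline (TF-IDF plus cosine similarity), shown only to make the scoring idea concrete. The paper itself compares far stronger embeddings (ELMo, BERT, Universal Sentence Encoder, etc.); the reference answer, student answer and 0-5 rubric mapping below are invented.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    reference_answer = "Photosynthesis converts light energy into chemical energy stored in glucose."
    student_answer = "Plants use light to make chemical energy in the form of glucose."

    vec = TfidfVectorizer().fit([reference_answer, student_answer])
    X = vec.transform([reference_answer, student_answer])

    similarity = cosine_similarity(X[0], X[1])[0, 0]
    score = round(5 * similarity)          # map similarity to an arbitrary 0-5 rubric
    print(f"similarity={similarity:.2f}, suggested score={score}/5")
    ```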

  • Day 1 Hyderabad

    January 30, 2020

  • We are embarking on the golden age of machine learning. Many of the constraints that typically held back the application of machine learning in the real world are starting to disappear. AWS offers a significant breadth and depth of cloud services. In this session, we explore the democratization of machine learning and how the growth of cloud services makes it easy for customers to move from idea to production with machine learning.
    Main Hall - TECH TALKS

  • This talk will focus on how to create a successful recipe for building AI into the existing traditional workflows of an organization. Machine Learning, Deep Learning and Data Science have evolved extremely fast in the recent past, and practitioners find it difficult to keep pace with the technology. The talk will cover the recent advancements in AI technology from both an infrastructure and a software perspective, and recommend best practices learnt from multiple domains for setting up a successful AI practice.
    Main Hall - TECH TALKS

  • We are living in a world where every aspect of our life is shaped by technology. Through diligent research, new technologies are developed and then used to improve well-being in every aspect of one's life. In the case of natural disasters such as a flood, earthquake, hurricane or even a stampede, there is always a challenge in treating the victims on time, whether because of a shortage of medical staff at relief camps or because the victim is stuck at a remote location. The Global Call for Code challenge provides a platform for developers across the globe to leverage their coding skills in creating solutions along the theme of natural disaster preparedness and relief. Driven by the passion to "Code for a Cause", Pragun and Rachit, two young developers from IBM Software Labs, developed VirtualAid as a part of the Call for Code. VirtualAid is the first means of help for people affected during a natural disaster from medical staff who can't be physically present in the disaster-prone area. In the session, the presenters will demonstrate the capabilities of VirtualAid and discuss how the power of Watson APIs was leveraged in the development of the solution. The solution employs a programmed drone mounted with a Pi Camera and an object detection module implemented on the Raspberry Pi to differentiate between living things and non-living things. The Watson Visual Recognition service is used for image processing, and the victim's responses are processed through the Watson Speech to Text service, with sentiment analysis performed by Watson's NLP service. The developers will share their experience of developing VirtualAid and discuss key learnings along the way.
    Main Hall - TECH TALKS

  • What changes when we try to address enterprise problems? Does our data even have information on the questions we want answered? Does the distribution of our production data look like the training data in the enterprise? [No: this assumption is almost always violated.] What happens when our model does not capture the underlying data generating process? Does capturing correlations vs. establishing causative relationships matter? What does it take to answer "what if this had not happened" questions? How does problem formulation change what questions can be answered? We will examine three example problems solved in production to explore what it takes to successfully solve problems in the enterprise, with a progressive move towards dynamic causal models.
    Main Hall - TECH TALKS

  • Today's smartphones come with multiple on-board sensors that can determine device orientation and acceleration in 3-dimensional space. In this paper, we utilize these on-board sensors to detect potholes, speed bumps and vehicles that require service. We employed unsupervised machine learning algorithms to analyse data acquired from our field trials on a variety of road conditions and vehicles. Using statistical techniques, it was found that the on-board sensors were able to accurately capture the required information and at the same time distinguish noise. We propose to use these techniques to improve ride quality by pre-emptively alerting the driver to a pothole, speed bump, etc., thereby improving safety and ride quality (a toy clustering sketch follows this entry).
    Main Hall - TECH TALKS
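    A sketch of the unsupervised idea on synthetic data: window the vertical accelerometer signal and let a clustering method (DBSCAN here) separate smooth-road windows from jolt-like anomalies such as potholes or speed bumps. The features, thresholds and data are illustrative, not the authors' field-trial pipeline.

    ```python
    from collections import Counter

    import numpy as np
    from sklearn.cluster import DBSCAN
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    smooth = rng.normal(0.0, 0.05, size=(200, 50))       # 200 windows of 50 samples each
    jolts = rng.normal(0.0, 0.05, size=(10, 50))
    jolts[:, 20:25] += 1.5                                # synthetic pothole spike

    windows = np.vstack([smooth, jolts])
    features = np.column_stack([windows.std(axis=1),          # vibration energy
                                np.abs(windows).max(axis=1)]) # peak jolt

    labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(StandardScaler().fit_transform(features))

    # anything outside the dominant "smooth road" cluster is flagged as anomalous
    majority = Counter(labels).most_common(1)[0][0]
    print("windows flagged as anomalous:", np.where(labels != majority)[0])
    ```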

  • India's increasing internet penetration (more than half a billion users and counting) has helped drive the next phase of growth for e-commerce in India. This has led to a proportional growth of the e-commerce logistics (e-logistics) supply chain in the country. With this exponential growth, India is strategically positioned to solve some of the most challenging problems facing the logistics industry, test the products in one of the toughest markets and take these solutions to the world. As one of the largest players in the e-logistics space, the depth and breadth of data produced in our network is huge: for example, over 50 million geo-codes are produced daily, along with additional information tagged to each geo-code, across the first mile, mid mile / line haul and last mile legs of the network. Considering our work on the last mile allocation problem for more than 20,000 people across rural and urban India delivering anywhere between 500,000 and 1,000,000+ shipments daily, this paper discusses key tenets and levers of building an AI platform from scratch and specific challenges that are unique to AI products. The "Last Mile" leg of the e-logistics supply chain in India contributes about 30% of the entire delivery cost of a shipment and is a tough problem to solve, primarily because of variable demand and poor localization of customer addresses. The typical last-mile delivery model of an e-logistics company includes segregating geographic routes of a last mile center (serviceability), with one or more delivery executives/subcontractors assigned to these routes for deliveries. To achieve better unit economics or generate higher flexible capacity, some of these routes are also subcontracted to local "mom and pop" stores, since they can cross-utilize their staff for local deliveries. However, the capacity of both local stores and delivery executives is limited, and there is a cost and reliability difference between the two. Hence, to minimize the cost of last-mile deliveries and maintain appropriate service levels, an e-logistics company must determine an optimal allocation of shipments to its fixed fleet and to the sub-contractors. Other factors in system-driven allocation include ensuring fairness and equal opportunity, utilizing local ground intelligence, the complexity of delivering a shipment, etc.
    Main Hall - TECH TALKS

  • Machine Learning has found application in various practical domains. In this presentation we will look at how Deep Learning can be used to classify images. Specifically, we will look at Convolutional Neural Networks, a subset of Deep Learning models, to solve a classic problem of computer vision: differentiating between two sets of images. Finally, we will compare the performance of machines with that of humans in recognizing images.
    Main Hall - TECH TALKS

  • Qualitative research is an important tool for gaining a broad understanding of the underlying reasons and motivations behind consumer decisions, and thereby, product success. Qualitative market research can be valuable when you are developing new marketing initiatives and you want to test reactions with the crowd and refine your approach. One of the powerful qualitative research methodologies is the focus group discussion, in which a moderator assembles a group of individuals to discuss a specific topic, aiming to draw conclusions from the complex personal experiences, beliefs, perceptions, opinions and attitudes of the participants through a moderated interaction. Focus group interviews are extensively leveraged in sectors like Consumer Products and Retail (CPR) and healthcare. Focus group discussion helps the CPR sector learn the market pulse and develop better marketing strategies that resonate with customers and boost sales. An extensive range of topics central to health and illness has been studied using focus groups, including the experience of specific disorders and diseases, violence and abuse, health care practices and procedures, health-related behaviours, and broader factors that mediate health and illness. Focus group discussions are conducted by moderators who keep the group "focused" and generate a lively, in-depth, productive discussion to obtain balanced input from a diverse group of people. The moderators must take notes of the discussion, analyse the opinions & expressions of participants and arrive at conclusions based on the observations. A biased opinion from the moderator could harm the business as well as the relationship with the customers. In this paper, we devise a methodology to automate the role of the moderator in focus group discussions using Artificial Intelligence. The proposed solution uses Machine Learning and Deep Learning to process the video and audio streams of a focus group recording to emulate the moderator's role. It summarizes the discussion, with the facial expressions and tonality of participants analysed and co-processed with the participants' positive and negative sentiments on the topics discussed. This implementation helps in reducing the manual effort put in by the moderator and the cost incurred in conducting the discussions. The solution would also eliminate any human bias from the moderator and arrive at an unbiased conclusion. The solution incorporates neural networks, machine learning & computer vision modules to perform facial emotion detection, sentiment analysis, tonality analysis and topic modelling.
    Main Hall - TECH TALKS

  • For the past five years, sequence learning models like RNNs and LSTMs have been popularly used for complex NLP tasks. We will begin with an overview of RNNs, LSTMs and encoder-decoder architecture variations. Attention in such models usually leads to accuracy improvements. Hence, we will discuss different attention variants. In the past two years, transformer based models have become more fashionable than recurrent ones. We will discuss transformer networks and then understand how they have been used in BERT, GPT2 and MT-DNN. Some efforts tried to combine the power of recurrent models and transformers leading to short-lived glory in the form of Transformer-XL and XLNet. After all this, we will end our journey with state-of-the-art models like RoBERTa, ALBERT and T5.
    Main Hall - TECH TALKS
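    For reference while following the attention and transformer portion of this talk, the standard scaled dot-product attention used in transformer models, where Q, K and V are the query, key and value matrices and d_k is the key dimension:

    ```latex
    \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V
    ```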

  • According to research conducted by Forbes, around 2.5 quintillion bytes of data are being generated every day. In the modern world, data is worth more than gold if meaningful insights can be successfully extracted from it. There have been multiple studies that focus on extracting insights from this huge pile of data, but none of them provides a satisfactory impact analysis. Impact analysis works like our traditional approach of calculating impact based on social proofing. Our model is capable of processing huge amounts of data from multiple sources like tweets, websites, documents, etc., and provides deep insights by identifying the context of the data provided to it. The solution works over Bidirectional Encoder Representations from Transformers (BERT), a neural-network-based technique for Natural Language Processing (NLP) pre-training. Initially, the solution is provided with multiple links to articles which may contain information related to the event on which impact analysis is to be performed. The links can be of multiple sorts: blogs, news articles, Twitter channels, hashtags, etc. The solution goes through each individual link and processes it based on the type of link. The more relevant the data sources are, the more relevant the end analysis will be after analyzing all the individual resources. We tested our model on the "US-China trade war" and used multiple data sources to let the model extract useful insights such as: 1) China's growing influence in Latin America that threatens America's dominance; 2) a more elaborate role in Europe, marginalizing Russia's dominance.
    Main Hall - TECH TALKS

  • Day 2 Bangalore

    January 23, 2020

  • Machine Learning has come a long way from being a topic of interest for academicians and research labs to everyday usage in our personal lives. The topic coverage starts from the basics of Linear Algebra & Data Mining and goes all the way up to the current state-of-the-art Deep Learning being used to solve computationally difficult problems. As we move towards analyzing complex data to build state-of-the-art Artificial Intelligence, there is a need for a paradigm shift in how problems are solved. There has been renewed interest in the study of natural systems, which often hold the key to algorithms for complex problems. Without going into the details of the underlying mathematics, the talk aims at relooking at the impact of Computational Intelligence on Quantum Machine Intelligence.
    Main Hall - TECH TALKS

  • Detecting language nuances in unstructured data could be the difference in serving up the right Google search results or using unsolicited social media chatter to tap into unexplored customer behavior (patients and HCPs). Due to complex regulations and compliance requirements, the Healthcare and Life Sciences industry is known to be slow in adopting Text Analytics and Natural Language Processing, and it faces significant challenges in analyzing this data because of its unstructured textual nature. Because of massive digital disruption across the globe, there is a sharp rise in the generation of naturally written forms of electronic data. This explosive growth of unstructured clinical, medical, regulatory and healthcare data has prioritized the use of innovative NLP and Text Analytics technologies. The key challenge in the adoption of NLP is the exponential growth in unstructured data in the form of unstructured texts from various business teams within the organization. This ever-growing data remains untouched for identifying actionable insights and recommendations that can generate significant value for consumers and patients across the globe. Therefore, detecting potential actions and recommendations in unstructured data could be the key difference in serving up the right insights or deep diving into untapped behavior and actions of physicians and patients. Along with this, the advent of the internet and IoT devices is also generating a significant amount of data for creating additional value in the market using Natural Language Processing. Many healthcare and life sciences organizations are progressively moving towards adopting AI-driven NLP and Text Analytics capabilities that will help them get improved, near-real-time insights from unstructured data and derive better results for improved performance across products. This talk will explain our strategy & thought leadership towards the adoption of NLP & Text Analytics in the Healthcare and Life Sciences industry.
    HALL 2: KNOWLEDGE TALK

  • To date, natural language processing has been extensively deployed in the domain of online media – Twitter, Facebook, news, and other widely available text resources such as IMDb movie reviews, Reuters data, etc. Different problems such as topic modelling, entity recognition and sentiment analysis have been tested and benchmarked on such standard "generic" datasets. However, specialized domains, such as biomedical text, have their own complexity. Biomedical text typically comprises two genres - scientific journal articles such as PubMed, and clinical documents. These data sources have a number of characteristics that make NLP difficult – such as the presence of parenthesized text, the lack of tagged standard data for the aforementioned NLP problems, and therapeutic-area variance. Additionally, crucial information is present in tables and figures, which are in general difficult for NLP applications to handle. At ZS, the Advanced Data Science Team is at the forefront of solving these challenges and building new applications to drive efficiency in trial operations. In this workshop, we will walk participants through how AI (and NLP in particular) is transforming the pharma R&D landscape. We will then deep-dive into a specific challenge - biomedical entity recognition. Biomedical NER involves identifying biomedical entities such as diseases, drugs and chemical compounds, p-values for statistical tests, etc. Participants will be acquainted with multiple state-of-the-art methods such as Conditional Random Fields, Bi-LSTM and Tree-LSTM co-training that are currently being used for biomedical NER.
    HALL - 3 - CASE PRESENTATION

  • At Verizon, we put our customers first, and we innovate for them with the application of state-of-the-art artificial intelligence models. We have applied AI in areas such as customer call & conversational analytics & insights, predicting customer behavior, customer feedback analytics, speaker diarization (live agent & customer), video analytics, intelligent chat routing, fraud detection, auto-suggest with chatbots, and auto-complete for live chat agents. In this presentation we will look into the underlying technology behind a few of these solutions, i.e., Transformers.
    Main Hall - TECH TALKS

  • Artificial Intelligence (AI) is now being embraced across a broad range of industries such as retail, manufacturing, education, construction, law enforcement, finance, and healthcare. AI is fast becoming integral to our daily lives - from image and facial recognition systems, machine-learning-powered predictive and prescriptive analytics, hyper-personalized systems, conversational applications, autonomous vehicles, to the identification of symptoms across diseases - the applications are numerous. With such a heavy reliance on the capabilities of AI, the need to trust these AI systems with all aspects of decision making is becoming critical. The predictions and prescriptions churned out by AI-enabled systems are having a tremendous impact on how we view and experience life, death, and personal wellness. This is especially true of AI systems used in healthcare, driverless cars, or even drones deployed during warfare. However, most of us have little visibility and knowledge of how AI systems make the decisions they do. In the absence of this clarity, it is even more difficult to comprehend how the results are being applied and consumed across various fields. Many of the techniques and algorithms used for machine learning are either virtually opaque or defy easy examination. This is largely true for most of the popular algorithms currently in use, specifically deep learning neural network approaches. Fortunately for us, there is an aspect of AI, called Explainable AI, which can direct computer systems to operate as expected and generate transparent explanations for the decisions they make. In the future we will need to focus more on the Explainable AI component in order to further build our trust in AI systems that are used in decision-making. In this presentation, we will explore various algorithms and techniques that support ease of comprehension and interpretability of these machine learning models (one such technique is sketched after this entry).
    HALL 2: KNOWLEDGE TALK
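    One concrete, model-agnostic interpretability technique from this space is permutation importance; the small scikit-learn sketch below runs on a toy dataset and is only an illustration, since the talk covers a much broader set of methods.

    ```python
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

    # shuffle each feature and measure how much the score drops: a transparent way
    # to see which inputs the model actually relies on
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    top = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])[:5]
    for name, imp in top:
        print(f"{name}: {imp:.3f}")
    ```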

  • Airport infrastructure is a massive investment for any city, and the realization of its benefits takes time, since building an airport is itself a multi-year undertaking. As per a study by ICAO, airports play a catalyst role in the growth of a city. The output multiplier and employment multiplier of aviation are 3.25 and 6.10 respectively. This implies that every 100 Rupees spent on air transport contributes 325 Rupees worth of benefits, and every 100 direct jobs in air transport result in 610 jobs in the economy as a whole. New airport initiatives create massive employment opportunities in airlines, airport operations, aircraft maintenance, fuel supply, construction, transport such as taxis and buses, airport outlets, etc. Airports contribute to the local GDP. The contribution of air connectivity to the economics of other industries - in terms of its effect on the tourism industry, reduced transport time, and connectivity with major cities - constitutes the catalytic or spin-off effects of airports on cities. Airport planning has a major role to play in putting any economy on a growth path. Foolproof, effective, transparent and near-real-time airport planning is essential for India. Satellite imagery is now used across the globe for monitoring and predicting farming, weather, environmental and urbanization trends. In this paper we propose a mathematical model for airport planning based on monitoring urbanization patterns across cities and towns in the country.
    Main Hall - TECH TALKS

  • TensorFlow is by far the most popular deep learning library, open-sourced by Google. In a short period of time it has grown tremendously in popularity compared to other libraries like PyTorch, Caffe and Theano. TensorFlow 2.0 has been released recently. This will be a technical session with code demos and a brief hands-on (using Google Colab) about building deep learning applications with TensorFlow. We will compare and contrast TensorFlow 1.x and TensorFlow 2.0 (a minimal TensorFlow 2.x example follows this entry). There will be something for everyone – those who are new to TensorFlow and Deep Learning, as well as the experts. There will be a lot to learn about Deep Neural Networks, Convolutional Neural Networks and image recognition.
    HALL 2: KNOWLEDGE TALK
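    A minimal TensorFlow 2.x sketch of the kind of demo described: the Keras Sequential API (with eager execution on by default) trained on MNIST, runnable as-is on Colab. It is an illustration, not the session's actual notebook.

    ```python
    import tensorflow as tf

    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=2, validation_split=0.1)
    print("test accuracy:", model.evaluate(x_test, y_test, verbose=0)[1])
    ```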

  • Across industries, organizations and brands are seeking opportunities to drive topline growth by predicting individual customer journey events and customizing interventions to address corresponding gaps or opportunities. The estimated impact of predicting these events and addressing the leakages can be in the range of millions, or even billions, of dollars. Machine learning approaches are increasingly used to predict these customer events accurately, and feature engineering and selection are core to these approaches. Typically, extensive effort is undertaken to handcraft features and manually iterate through alternate models. However, this is increasingly challenging given the growing volume, dimensionality and temporal nature of the data. Domain experts can hypothesize relevant factors to consider for specific customer events, but identifying the right set of features across numerous permutations is a matter of conjecture and is combinatorially complex. Automated feature engineering and discovery approaches offer a viable alternative for handling this combinatorial complexity in an optimal manner. In this session, we present an approach to that effect, inspired by evolutionary algorithms, and key considerations in enabling it at scale (a toy sketch follows this entry).
    Main Hall - TECH TALKS
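    A toy sketch of evolutionary feature selection in the spirit of the session, not the presenters' system: binary masks over features are mutated and kept when the cross-validated score does not get worse. Dataset, mutation scheme and generation count are arbitrary illustrations.

    ```python
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)
    rng = np.random.default_rng(0)

    def fitness(mask):
        if not mask.any():
            return 0.0
        model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        return cross_val_score(model, X[:, mask], y, cv=3).mean()

    mask = rng.random(X.shape[1]) < 0.5           # random initial feature "genome"
    best = fitness(mask)
    for _ in range(30):                           # 30 generations of (1+1)-style evolution
        child = mask.copy()
        child[rng.integers(X.shape[1])] ^= True   # point mutation: toggle one feature
        score = fitness(child)
        if score >= best:                         # keep the child if it is no worse
            mask, best = child, score

    print(f"selected {mask.sum()} features, CV accuracy {best:.3f}")
    ```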

  • 1. Introduction to Neural Networks and CNNs
    2. Hands-on CNN using TensorFlow and Keras on the Google Colab platform.
    3. Various CNN architectures and transfer learning techniques for pretrained CNN models.
    4. Hands-on transfer learning techniques (a short transfer-learning sketch follows this entry).
    5. Convolutional Neural Networks for object detection and segmentation.
    HALL - 3 - CASE PRESENTATION
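    A transfer-learning sketch in Keras matching items 3-4 of the agenda: freeze a pretrained ImageNet backbone and train a small classification head. The class count and the (commented-out) input pipeline are placeholders, not workshop material.

    ```python
    import tensorflow as tf

    NUM_CLASSES = 5  # placeholder

    base = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                             include_top=False,
                                             weights="imagenet")
    base.trainable = False                      # freeze the pretrained CNN features

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_dataset, validation_data=val_dataset, epochs=5)
    ```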

  • Good generalized machine learning models should have high variability post learning. Tree-based approaches are very popular due to their inherent ability to be visually represented for decision consumption, as well as their robustness and reduced training times. However, tree-based approaches lack the ability to generate variation in regression problems: the maximum variation generated by any single tree-based model is limited to the number of training observations, with each observation being a terminal node itself - and such a model is overfit. This paper discusses a hybrid approach that uses two intuitive and explainable algorithms, CART and k-NN regression, to improve generalization and sometimes the runtime for regression problems. The paper first proposes the use of a shallow CART (tree depth less than the optimal depth after pruning). Following the initial CART, a k-NN regression is performed within the terminal node to which the observation being predicted belongs (a sketch of this hybrid follows this entry). This leads to better variation as well as more accurate predictions than the use of a CART or a k-NN regressor alone, and adds another level of depth over an OLS regression.
    HALL 2: KNOWLEDGE TALK
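    A sketch of one plausible reading of the hybrid described above (not the paper's code): fit a shallow DecisionTreeRegressor, then run k-NN regression only among the training points that fall in the same leaf as the query point. The depth, k and toy data are illustrative.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.tree import DecisionTreeRegressor

    def fit_cart_knn(X, y, max_depth=3):
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, y)
        leaves = tree.apply(X)                       # leaf id for every training row
        return tree, leaves, X, y

    def predict_cart_knn(model, X_new, k=5):
        tree, leaves, X, y = model
        preds = []
        for x in X_new:
            leaf = tree.apply(x.reshape(1, -1))[0]
            mask = leaves == leaf                    # training points sharing this leaf
            k_eff = min(k, int(mask.sum()))
            knn = KNeighborsRegressor(n_neighbors=k_eff).fit(X[mask], y[mask])
            preds.append(knn.predict(x.reshape(1, -1))[0])
        return np.array(preds)

    # toy usage
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(300, 2))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=300)
    model = fit_cart_knn(X, y)
    print(predict_cart_knn(model, X[:5]))
    ```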

  • Information Technology has improved on the outdated systems of how we thought and lived in earlier days; nowadays, it has entirely changed our professional behaviour and conduct. People who lag behind and are not aware of its applications suffer more and miss many important new opportunities. Better utilization of its applications provides expert guidance for executing day-to-day activities and organizational demands at one's fingertips. In this era of operational technologies, industries worldwide have become highly reliant on technology services for their momentum, maturity, efficiency, consistency, reliability and high level of success. Many enterprises have some form of Artificial Intelligence and Machine Learning application in place, whether in the form of a pilot program, a proof-of-concept in the cloud, or even a production implementation. Even though Artificial Intelligence has been around for a long time, it is still an emerging technology for many enterprises. The growing global economy and the demand for customized products are moving the manufacturing industry (Industry 4.0) from a sellers' market towards a buyers' market. In this talk, we focus on the importance of Cloud AI, which is a simple mantra for the success of Industry 4.0. This success is only possible when we upgrade our industry and adopt the use of Cloud AI.
    HALL 2: KNOWLEDGE TALK

  • This talk discusses the basic Wh questions (what, why, where, when and how) of AI DevOps and the challenges in deploying ML/DL models. It covers a couple of reference frameworks for this journey, such as MLflow, SageMaker and the Azure ML service (a minimal MLflow tracking sketch follows this entry). Sunil covers a couple of use cases from the telecom industry to illustrate the need for faster diagnosis and edge deployment. He highlights the various trade-offs one needs to handle in the ML life cycle, including the PLASTER framework. The key takeaways for the audience include the important factors to be considered while designing and deploying DL-based solutions, steps towards AI DevOps, and corresponding research areas to work on.
    Main Hall - TECH TALKS
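    A minimal MLflow tracking sketch, since MLflow is one of the frameworks the talk references: log parameters, a metric and a model for a toy scikit-learn run, then browse the results with `mlflow ui`. The model, dataset and run name are arbitrary examples.

    ```python
    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import load_diabetes
    from sklearn.linear_model import Ridge
    from sklearn.metrics import r2_score
    from sklearn.model_selection import train_test_split

    X, y = load_diabetes(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    with mlflow.start_run(run_name="ridge-baseline"):
        alpha = 0.5
        model = Ridge(alpha=alpha).fit(X_train, y_train)

        mlflow.log_param("alpha", alpha)
        mlflow.log_metric("r2", r2_score(y_test, model.predict(X_test)))
        mlflow.sklearn.log_model(model, "model")   # versioned artifact for later deployment
    ```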

  • With the boom of digital online platforms and social media, reviews posted by customers are making a huge impact on buy-or-no-buy decisions. According to a recent survey, 85% of users read product reviews, and 68% of them say they rely on reviews when making purchasing decisions. An incorrect or misleading review can be disastrous, both for customers who fall prey to fake positive reviews and for businesses losing potential customers to spurious negative reviews posted by competitors or other unscrupulous agents. Sentiment Analysis (SA) / Opinion Mining (OM) has become one of the most essential components of text analytics due to its promising commercial benefits. One of the main issues in OM, apart from extracting emotions and polarity, is detecting fake positive and negative reviews. Amazon says that of the 1.8 million unverified reviews posted in March 2019, 99.6% were five-star. By comparison, during 2017-2018, the number of unverified reviews averaged fewer than 300,000 per month and only 75% were five-star. This talk will focus on: 1. the ever-increasing problem of fake reviews and the challenges in identifying them; 2. annotating fake reviews; 3. a general framework for identifying fake reviews; 4. NLP techniques that can be leveraged for identifying fake reviews.
    Main Hall - TECH TALKS

  • The globally accepted solution to the climate change problem is the increased adoption of renewable energy sources for the generation of electricity. This is leading to numerous wind power plants being set up all over the world. These power plants rely on the available wind energy for power generation. To harness all the possible energy from the wind, the rotor needs to be perpendicularly aligned with the direction in which the wind is blowing; this is done through a control system called the yaw mechanism. The wind turbine is said to have a yaw error if the rotor is not perpendicular to the wind. The share of energy that can be harnessed from the wind drops with the cosine of the yaw error, so the correction of yaw errors is necessary to ensure lower losses and optimized generation (see the relationships sketched after this entry). The usual detection techniques include intuitive guesses made by expert site engineers working at the wind farm and observation of generation patterns. These techniques are, at best, approximations of the actual scenario, and work only when there are very large yaw errors. The delay in detection with these manual methods amounts to approximately 12% loss in energy generation. This paper explains how machine learning is being used to detect a yaw error right after its onset, identify the root cause and quantify the consequent energy lost. The detection of yaw misalignment is done by analyzing the wind direction, nacelle position and yaw angle data captured by the Supervisory Control and Data Acquisition (SCADA) system installed at the wind power plant. This analysis is then augmented with statistical tests to rule out errors in measurement and ascertain that the data is indeed indicating yaw misalignment. When the results prove to be statistically significant, the next step is to identify the root cause of the misalignment so that an appropriate corrective action can be recommended to the site engineers. The probable root causes are the presence of a faulty sensor, a wrongly calibrated sensor, or low yaw speed. These root causes are identified through pattern recognition algorithms that look for the different signatures each one leaves in the data. A non-linear regression model is then trained to learn the pattern of energy generation of a turbine in healthy condition. This model is then used to predict the energy generation of the turbine for which yaw misalignment has been observed. The difference between the actual and predicted values is the estimate of the loss in power generation due to yaw misalignment. The adoption of this algorithm has helped in early identification of yaw errors and in taking corrective actions in near real time, thus preventing losses in energy generation of up to 7-9% annually.
    HALL - 3 - CASE PRESENTATION
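
    A simplified sketch of the loss-estimation step under stated assumptions: the SCADA column names, file and thresholds below are illustrative, and a generic non-linear regressor stands in for the speakers' specific model.

      # Estimate yaw misalignment and the resulting energy loss from SCADA data.
      import numpy as np
      import pandas as pd
      from sklearn.ensemble import GradientBoostingRegressor

      scada = pd.read_csv("scada_10min.csv")               # hypothetical 10-minute export

      # Yaw error = circular difference between wind direction and nacelle position.
      diff = (scada["wind_direction_deg"] - scada["nacelle_position_deg"] + 180) % 360 - 180
      scada["yaw_error_deg"] = diff

      # Learn a non-linear power curve from a period with near-zero yaw error.
      healthy = scada[scada["yaw_error_deg"].abs() < 2]
      model = GradientBoostingRegressor().fit(healthy[["wind_speed_ms"]], healthy["power_kw"])

      # Predict what the misaligned turbine should have produced and sum the shortfall.
      misaligned = scada[scada["yaw_error_deg"].abs() >= 8]
      expected = model.predict(misaligned[["wind_speed_ms"]])
      loss_kwh = (expected - misaligned["power_kw"]).clip(lower=0).sum() / 6   # 10-min samples
      print(f"Estimated lost energy: {loss_kwh:.0f} kWh")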

  • Machines may have “artificial” intelligence, but they want you to believe that they are human. Can they behave as humans do? Does that make sense, or is it better to go all out and declare the real truth to the user? If we want machines to take on human characteristics, what can be done? What is the state of the art in anthropomorphic systems, and how can machine learning and NLP help? In this talk I attempt to scratch the surface of this problem. Come, let's discuss. Agenda: 1. Origins of anthropomorphism; 2. Anthropomorphism in tech; 3. Theoretical models; 4. How to build anthropomorphic characteristics into a machine; 5. Evidence of performance improvements.
    Main Hall - TECH TALKS

  • The onset of cancerous cells caused by mutant cells in the foetal stages is an evolving mechanism that can be traced through pathological as well as clinical evidence, which requires interpretation within a Machine Learning (ML) framework. Random psychosomatic states in the antenatal stages have profound influences on embryonic cellular development. These changes leave an imprint on the pathological parameters of the patient and on the progressive evolution of changes triggered by the mental states of relative well-being. Hormonal configurations are the essential markers in the psychosomatic reference frames that are fed into the ML model. The mathematical models explore the parametric influences of the pathological changes, the hormonal changes, the relative coordinates of the fetus, the relative growth ratio of the brain and the body, and finally the functional MRI derivatives for cell morphology and cytoplasmic changes that signal potential mutants in embryonic development. The algorithm development factors in the measurable variables in the development cycle of the fetus and crystallizes probabilistic computations for predicting the impact on the cytoplasmic growth trajectory and the potential for conversion into mutant cells. The fMRI-derived elements are valuable parametric measures that help build the predictive models, although these remain theoretical models that still require the investigative rigour of applied clinical research for cancer and a host of psychosomatic diseases.
    HALL - 3 - CASE PRESENTATION

  • This talk covers enhancing memory-based collaborative filtering techniques for group recommender systems by resolving the data-sparsity problem, comparing the proposed method's accuracy with basic memory-based techniques and a latent-factor model, making accurate predictions for unknown ratings in sparse matrices, and improving user satisfaction with the group recommender system's performance. Memory-based collaborative filtering techniques are widely used in recommender systems; they rely on a fully populated user-item rating matrix, but in group recommender systems this matrix is usually sparse and users' preferences are unknown. Recommendation systems are widely used in conjunction with many popular personalized services, enabling people to find not only content items they are currently interested in, but also those in which they might become interested. Many recommendation systems employ the memory-based collaborative filtering (CF) method (see the sketch after this item), which is generally accepted as one of the consensus approaches. Despite its usefulness, several limitations remain, such as the sparsity and cold-start problems that degrade the performance of CF systems in practice. To overcome these limitations, a content-metadata-based approach that uses content metadata in an effective way is suitable. By complementarily combining content metadata with conventional user-content ratings and trust-network information, the approach remarkably increases the amount of suggested content and accurately recommends a large number of additional content items. Experimental results show a significant enhancement in performance.
    Main Hall - TECH TALKS
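
    A minimal sketch of the memory-based (user-user) CF baseline mentioned above, on a toy ratings matrix with missing entries; the data and helper function are illustrative only.

      # Predict a missing rating as the similarity-weighted average of other users'
      # ratings for that item, using cosine similarity over co-rated items.
      import numpy as np

      R = np.array([[5, 3, np.nan, 1],       # rows = users, columns = items
                    [4, np.nan, 4, 1],
                    [1, 1, np.nan, 5],
                    [np.nan, 1, 5, 4]], dtype=float)

      def predict(R, user, item):
          target, scores, weights = R[user], 0.0, 0.0
          for other in range(R.shape[0]):
              if other == user or np.isnan(R[other, item]):
                  continue
              mask = ~np.isnan(target) & ~np.isnan(R[other])
              if not mask.any():
                  continue
              a, b = target[mask], R[other][mask]
              sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
              scores += sim * R[other, item]
              weights += abs(sim)
          return scores / weights if weights else np.nan

      print(predict(R, user=0, item=2))      # estimate user 0's rating for item 2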

  • The Predictive Insights machine learning model proactively notifies the Ops team about a potential issue before it even occurs, using deep-learning models trained on historical monitoring-metrics data. This lets us consider components across the whole application-tier ecosystem (app, web, message queue, and database systems) and proactively identify the problematic component causing the issue (a simplified sketch of the idea follows this item).
    HALL 2: KNOWLEDGE TALK
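
    As a greatly simplified stand-in for the deep-learning model described above, the core idea can be sketched as forecasting a component's metric a few steps ahead and alerting when the forecast crosses a threshold; the metric values and threshold here are toy assumptions.

      # Forecast a monitoring metric and raise a proactive alert on a predicted breach.
      import numpy as np
      from sklearn.linear_model import LinearRegression

      cpu_history = np.array([52, 55, 57, 61, 64, 69, 73, 78], dtype=float)  # toy % CPU
      t = np.arange(len(cpu_history)).reshape(-1, 1)

      model = LinearRegression().fit(t, cpu_history)
      future = np.arange(len(cpu_history), len(cpu_history) + 3).reshape(-1, 1)
      forecast = model.predict(future)

      THRESHOLD = 90.0
      if (forecast >= THRESHOLD).any():
          print("Proactive alert: metric projected to breach threshold", forecast.round(1))
      else:
          print("No predicted breach in the next 3 intervals:", forecast.round(1))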

  • Day 2 Hyderabad

    January 31, 2020

  • Artificial intelligence technology is now making its way into manufacturing, and the machine-learning technology and pattern-recognition software at its core could hold the key to transforming the factories of the near future. AI will drive manufacturing and quality control, shorten design time, reduce materials waste, improve production reuse, enable predictive maintenance, and more.
    Main Hall - TECH TALKS

  • The claims journey is perhaps the most pivotal aspect of any insurance contract, be it motor, commercial or personal health. While the industry has long operated around a manual, paper-intensive process, there is steady adoption of machine counterparts by insurance agents: algorithms for fraud detection, claim-propensity prediction and, eventually, settling claims on behalf of insurers, based not on heuristically pre-defined rules but on dynamically learned patterns revealed in historically reported claims. In this talk, I will specifically emphasise novel methodologies being employed by insurers in the Indian context around claims settlement in the travel, motor and health insurance verticals. I will also deep dive into the technical cloud architectures employed for deploying the algorithms at scale and monitoring them for any open ends through a robust feedback loop.
    Main Hall - TECH TALKS

  • Like any other technology, AI has been growing bottom-up. We have mastered the art of building accurate AI models, deploying them at scale, and continuously improving them as more data comes in. We have done that for a variety of AI APIs now. We are now ready to build the next generation of products that can use these AI APIs as an ecosystem and deliver products vastly different from what we think of as products today. We will explore what such products look like and how our AI and product thinking have to evolve to build them.
    Main Hall - TECH TALKS

  • Siemens is one of the leading market players in the locomotive and mobility business across the world. Our platform Railigent (Rail + Intelligent) lets customers and clients see how the future of mobility is shaping up. There are many use cases in the rail world where images or videos can be used as an unstructured data source. To name a few: identifying components that need maintenance, and which part of the component triggered it, so that engineers save time and effort by focusing directly on the pre-filtered components and parts (failure prediction for train components in cities like Prague and London); identifying faulty joints in the rails with static cameras mounted on the train; monitoring train-station platforms with CCTV cameras and alerting on congestion; comparing energy consumption of locomotives with the ecoCruise mode turned on or off; counting the passengers who board and get off a train; and monitoring the number and location of passengers on a platform, and many more, with the help of our ecosystem.
    Main Hall - TECH TALKS

  • Localization has been a big challenge in the Indian-language scenario. In our country, content in local languages is increasing, and with the advent of the internet and social media, that content and its attendant challenges have grown many-fold. The challenge lies in processing it: each linguistic community demands natural language processing (NLP) based tools to analyse and use this content. Identifying the online content as well as its author has been a big challenge for quite some time, and it is important for plagiarism analysis and document analysis. The challenge has not been tackled much because of the scarcity of data. We have created this solution for the first time for Indian English, Hindi and Bangla: a combined solution for content detection and author identification. The work exploits three online monolingual corpora of plain text as well as Named Entity annotated text. We have used a standard statistical classifier and report impressive results (a small authorship-attribution sketch follows this item).
    Main Hall - TECH TALKS
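
    A minimal illustration of author identification with a standard statistical classifier (not the speakers' exact setup): character n-gram TF-IDF features with a linear SVM; the corpus file and columns are hypothetical.

      # Authorship attribution sketch: character n-grams capture stylistic cues.
      import pandas as pd
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.svm import LinearSVC
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import cross_val_score

      docs = pd.read_csv("articles.csv")                   # hypothetical: text, author
      pipeline = make_pipeline(
          TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
          LinearSVC())
      scores = cross_val_score(pipeline, docs["text"], docs["author"], cv=5)
      print("Mean cross-validated accuracy:", scores.mean().round(3))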

  • In the current (2019) scenario, a middle-class youngster or middle-aged person spends a large share of their salary on loans and other expenses and saves far less than people did in 2000. Solution model: through our research, we apply unsupervised and supervised learning to achieve the following: I. identify clusters of income and expense to reveal spending patterns (for the scope of this work we use the individual's bank statement, and a small clustering sketch follows this item); II. give the individual the ability to visualise their income and expense patterns over a span of 5-10 years; III. prepare a predictive model that helps the individual identify areas of focus in terms of savings and future spending.
    Main Hall - TECH TALKS
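
    A toy sketch of the unsupervised step: clustering bank-statement categories by spend and transaction count to surface patterns; the feature values are invented purely for illustration.

      # Cluster expense categories with k-means on simple spend features.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.preprocessing import StandardScaler

      # Toy features per category: [monthly_spend, transaction_count]
      X = np.array([[25000, 2],    # loan EMI
                    [8000, 30],    # food & groceries
                    [1200, 4],     # utilities
                    [15000, 6],    # shopping
                    [500, 2]])     # savings deposits

      labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
          StandardScaler().fit_transform(X))
      print(labels)                # categories sharing a label behave similarly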

  • In this talk I will describe some of the new challenges that we are exploring at Microsoft AI & R, Hyderabad. The talk will contain method-level discussions, data-acquisition strategies and an in-depth look at a few applications.
    Main Hall - TECH TALKS

  • The recruitment industry is a low-margin industry in which a recruitment consultant can spend days closing job orders. A consultant may receive hundreds of job requests/orders every day from customers in various industries. These requests can be extremely varied in terms of skill requirements, business domain, experience, region, etc., which makes it challenging for recruitment companies to fulfil them in a timely fashion while improving profit margins. Our solution focuses on helping recruiters prioritize job requests based on a probability score of request completion (a simplified sketch follows this item), thereby increasing the fill rate (the number of job requests successfully completed) and reducing the time taken, ensuring higher value return with relatively less effort. It also eliminates the existing manual practice of classifying job requests as low, medium or high priority, further reducing cost and labour requirements. The solution is entirely hosted on Azure cloud and leverages native Azure services for data collection, model management and monitoring. It follows a micro-services architecture, with models deployed in Azure Kubernetes Service, and can be embedded in the recruiter's applicant tracking system to generate real-time recommendations of complexity vs. value.
    Main Hall - TECH TALKS
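
    A simplified sketch of the prioritisation idea under stated assumptions: a classifier estimates the probability that an order will be filled and buckets it into low/medium/high priority. The files, feature names and thresholds below are hypothetical.

      # Score incoming job orders by predicted fill probability and bucket them.
      import pandas as pd
      from sklearn.ensemble import GradientBoostingClassifier

      history = pd.read_csv("job_orders.csv")              # hypothetical past orders
      features = ["skill_rarity", "offered_rate", "region_supply", "client_fill_history"]
      model = GradientBoostingClassifier().fit(history[features], history["was_filled"])

      incoming = pd.read_csv("incoming_orders.csv")        # hypothetical new orders
      p_fill = model.predict_proba(incoming[features])[:, 1]
      incoming["priority"] = pd.cut(p_fill, bins=[0.0, 0.3, 0.7, 1.0],
                                    labels=["low", "medium", "high"],
                                    include_lowest=True)
      print(incoming["priority"].value_counts())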

  • Predicting customer churn gives you the opportunity to stem the leak in your revenue base; it has the same impact as making the marketing engine more effective. Reducing churn has the following strategic benefits: (1) it reduces marketing cost, since acquiring a new customer costs five times or more than retaining one; (2) it provides retention insights, as churn analysis gives important cues on retention and lets you keep tabs on customers' changing needs and preferences; and (3) it fosters long-term relationships and loyalty, because acting on insights from churn analysis removes bottlenecks. In this churn-prediction case study for a music streaming service, we found that user-activity attributes did not identify churning customers, but transactional attributes contain patterns that help identify customer churn. We developed 10 base models and a two-layered ensemble model (a generic sketch of such an ensemble follows this item). The ensemble model was the best, predicting customers who are likely to churn with an accuracy of 96% and an F1-score of 86.5%.
    Main Hall - TECH TALKS
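
    A generic sketch of a two-layer (stacked) ensemble for churn prediction; the base learners, file and column names here are illustrative, not the talk's exact ten models.

      # Stack two base classifiers under a logistic-regression meta-learner.
      import pandas as pd
      from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, StackingClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import f1_score

      data = pd.read_csv("transaction_features.csv")       # hypothetical: features + churned
      X, y = data.drop(columns=["churned"]), data["churned"]
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

      stack = StackingClassifier(
          estimators=[("rf", RandomForestClassifier(n_estimators=200)),
                      ("gb", GradientBoostingClassifier())],
          final_estimator=LogisticRegression(max_iter=1000))   # second layer
      stack.fit(X_tr, y_tr)
      print("F1-score:", round(f1_score(y_te, stack.predict(X_te)), 3))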

  • Over 2.3 billion people have been affected by floods in the last 20 years, causing countless deaths; more than 92 million cattle are lost every year, seven million hectares of land are affected, and global damage over the last 5 years runs to trillions of dollars. Floods are complicated natural events that depend on several parameters, so they are very difficult to model analytically: the floods in a catchment depend on the characteristics of the catchment, the rainfall and the antecedent conditions, which makes estimation of the flood peak a very complex problem. Much of the damage is due to the lack of flood-prediction systems that can predict the situation accurately. To overcome this challenge we are building a flood-prediction system using predictive modelling. We have divided our idea into small fragments that are nonetheless general enough to be used globally; we have focused on the most flood-prone state of India, but the approach can be used for any low-lying geographical region. The plains of Bihar, adjoining Nepal, are drained by a number of rivers that have their catchments in the steep and geologically nascent Himalayas: the Kosi, Gandak, Burhi Gandak, Bagmati, Kamla Balan, Mahananda and the Adhwara group of rivers originate in Nepal, carry high discharge and a very high sediment load, and drop it in the plains of Bihar. About 65% of the catchment area of these rivers falls in Nepal/Tibet and only 35% lies in Bihar. Bihar is India's most flood-prone state, with 76 percent of the population in north Bihar living under the recurring threat of flood devastation; about 68,800 sq km out of a total geographical area of 94,163 sq km (73.06 percent) is flood affected. According to historical data, 16.5% of the total flood-affected area in India is located in Bihar, while 22.1% of India's flood-affected population lives in Bihar, and from 1979 to the present day more than 8,873 people and 27,573 animals have lost their lives to floods. Tools and technologies that are being used or can be used for flood prediction include: IBM Watson Studio, which democratizes machine learning and deep learning to accelerate the infusion of AI and drive innovation; an intelligent hydro-informatics integration platform for regional flood-inundation warning systems; a three-parameter Muskingum model coupled with an improved bat algorithm; and deep learning with a long short-term memory (LSTM) network approach for rainfall-runoff simulation (a minimal LSTM sketch follows this item).
    Main Hall - TECH TALKS
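
    A minimal sketch of the LSTM rainfall-runoff idea named above (tf.keras), trained on synthetic data purely for illustration; the window length, architecture and target are assumptions.

      # Predict river discharge from a short window of daily rainfall with an LSTM.
      import numpy as np
      import tensorflow as tf

      timesteps, features = 14, 1                          # 14 days of rainfall per sample
      X = np.random.rand(500, timesteps, features)         # synthetic rainfall sequences
      y = X.sum(axis=(1, 2)) + 0.1 * np.random.rand(500)   # synthetic discharge target

      model = tf.keras.Sequential([
          tf.keras.Input(shape=(timesteps, features)),
          tf.keras.layers.LSTM(32),
          tf.keras.layers.Dense(1)])
      model.compile(optimizer="adam", loss="mse")
      model.fit(X, y, epochs=5, batch_size=32, verbose=0)
      print("Predicted discharge:", model.predict(X[:1], verbose=0)[0, 0])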

Check Last Year's Schedule

Schedule 2019

Extraordinary Speakers

Meet the best Machine Learning Practitioners & Researchers from the country.

  • Regular Pass

    Available from 9th Jan to 29th Jan 2021
  • Access to all tracks & workshops
  • Access the recorded sessions later
  • Certificate of attendance provided
  • Access to online networking with attendees & speakers
  • 700 + taxes
  • Late Pass

    Available from 30th Jan 2021 onwards
  • Access to all tracks & workshops
  • Access the recorded sessions later
  • Certificate of attendance provided
  • Access to online networking with attendees & speakers
  • 1,000 + taxes