Pages

Friday, January 28, 2022

Making smarter decisions with data analytics

High-value data and analytics can eliminate the guesswork in healthcare decision-making. Healthcare organizations with strong governance and good data analytics capabilities are better positioned to adapt to value-based care models.

The healthcare industry was starting to recognize the importance of high-value data as it moved from fee-for-service reimbursement to value-based care models – but the COVID-19 pandemic made it even more apparent that hospitals need the ability to access and integrate data from multiple sources in order to make meaningful clinical and business decisions. TJ Elbert, Senior Vice President and General Manager of Data at Health Catalyst, said provider organizations, especially, were desperate to access trusted data as they treated patients in a virtual setting.

“There’s a real urgency when you are reliant on monitors and devices to get a full picture of a patient,” he pointed out. “That has increased the need for governance of all that data, from both inside and outside of the hospital, so providers feel like they can trust and then use that data to make decisions that impact care.”

Healthcare organizations that already had good data analytics capabilities and strong governance in place had a much easier time transitioning to new models of care as they sought new ways to treat patients outside of the office setting, according to Elbert. They were also in a better position to make critical business decisions as the pandemic altered what kinds of services they were allowed to provide.

“If you are able to use your data, you can better inform virtual care – and pivot to a more digital care model,” he said. “But more than that, you can make smarter decisions across the entire organization. Because when, all of a sudden, elective and outpatient surgeries are gone and you lose that revenue, you need to find a way to come back. Organizations with trusted data could figure out when it was safe to bring those services back, what capacity was needed for COVID-19 patients, how to manage patient access to care and how to recover even as things kept changing. Having that information gives you a much fuller picture of what’s happening today – and where you need to be in the future.”

High-value data and analytics, truly, can eliminate the guesswork in healthcare decision-making, said Elbert. Unfortunately, he stated, many organizations still lack the tools to leverage the data they need, whether they are trying to facilitate team-based medicine and care collaboration or understand how to find the money to invest in new surgical technologies that will bolster their bottom line in the future.

“These capabilities can provide you the data to inform you where, when and how to shift – and how to do so in a way that puts your organization where it needs to be from a financial perspective so you are in a position to deliver the care your patients need,” said Elbert. “When you can do this modeling, you can tug on different threads and see what the impact will be, clinically and financially, to solve real problems. Data, really, is the key to allowing organizations to safely move forward with different initiatives – and, in doing so, improve patient outcomes and move the field, as a whole, forward.”


www.dprg.com

Monday, January 24, 2022

Identifying And Retaining Good Talent Is Crucial In Data Analytics: Vineet Shukla, Mahindra Group

 

Mahindra Group recently appointed Vineet Shukla as its Head of Data. Shukla has close to two decades of experience, a major chunk of it spent working with data, AI and machine learning. He started his journey as a software engineer before earning an MBA in business analytics from IIM Bangalore. Before Mahindra Group, Shukla worked as the senior director for data science and machine learning at UnitedHealth Group, where he led the team that built an AI/ML practice from the ground up.

Analytics India Magazine caught up with Shukla for a detailed interview.


Edited excerpts:
AIM: You started your career as a software engineer. Why did you choose to transition to the field of machine learning and data science?
Vineet Shukla: Even when I was a software engineer, I used to work on tweaking algorithms. Owing to my deep interest in mathematics, I could make a smooth transition to data science. In this field, I got the opportunity to conceptualise and design a few ‘big data’ solutions that have helped organisations leverage their potential information assets to gain insights, leading to more efficient, effective, and competitive business decisions. I have collaborated with many key stakeholders (globally) in my career and built analytical solutions on big data, using state-of-the-art tools and technologies (e.g. BigQuery, MapReduce, Hadoop/Hive, R, NLP, Tableau).



Thursday, January 20, 2022

Nonsense can make sense to machine-learning models

Deep-learning methods confidently recognize images that are nonsense, a potential problem for medical and autonomous-driving decisions.


For all that neural networks can accomplish, we still don’t really understand how they operate. Sure, we can program them to learn, but making sense of a machine’s decision-making process remains much like a fancy puzzle with a dizzying, complex pattern where plenty of integral pieces have yet to be fitted.


If a model were trying to classify an image of said puzzle, for example, it could encounter well-known but troublesome adversarial attacks, or even more run-of-the-mill data or processing issues. But a new, more subtle type of failure recently identified by MIT scientists is another cause for concern: “overinterpretation,” where algorithms make confident predictions based on details that don’t make sense to humans, like random patterns or image borders.

This could be particularly worrisome for high-stakes environments, like split-second decisions for self-driving cars and medical diagnoses for diseases that need immediate attention. Autonomous vehicles in particular rely heavily on systems that can accurately understand surroundings and then make quick, safe decisions. In the study, a network used specific backgrounds, edges, or particular patterns of the sky to classify traffic lights and street signs – irrespective of what else was in the image.

The team found that neural networks trained on popular datasets like CIFAR-10 and ImageNet suffered from overinterpretation. Models trained on CIFAR-10, for example, made confident predictions even when 95 percent of an input image was missing and the remainder was senseless to humans.

“Overinterpretation is a dataset problem that's caused by these nonsensical signals in datasets. Not only are these high-confidence images unrecognizable, but they contain less than 10 percent of the original image in unimportant areas, such as borders. We found that these images were meaningless to humans, yet models can still classify them with high confidence,” says Brandon Carter, MIT Computer Science and Artificial Intelligence Laboratory PhD student and lead author on a paper about the research.
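As a toy illustration of how this kind of failure can arise (this is a sketch under invented assumptions, not the MIT team's method or models), consider a hypothetical classifier whose confidence comes entirely from border pixels. Masking the whole interior of the image then leaves its confidence untouched:

```python
# Toy demonstration of "overinterpretation": a classifier that relies
# only on border pixels is unaffected when the image interior is masked.
# The classifier and the 8x8 "image" below are invented for illustration.

def border_score(image):
    """Hypothetical classifier: confidence = mean of the border-pixel values."""
    h, w = len(image), len(image[0])
    border = [image[r][c] for r in range(h) for c in range(w)
              if r in (0, h - 1) or c in (0, w - 1)]
    return sum(border) / len(border)

def mask_interior(image):
    """Zero out every non-border pixel, discarding most of the image."""
    h, w = len(image), len(image[0])
    return [[image[r][c] if r in (0, h - 1) or c in (0, w - 1) else 0
             for c in range(w)] for r in range(h)]

image = [[1.0] * 8 for _ in range(8)]   # dummy all-ones 8x8 "image"
masked = mask_interior(image)

# The "classifier" is exactly as confident on the masked image as on the full one.
print(border_score(image) == border_score(masked))  # True
```

A real study would of course probe a trained network rather than a hand-built scorer, but the mechanism is the same: if the signal the model latched onto lives in an unimportant region, removing everything else changes nothing.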

Deep-image classifiers are widely used. In addition to medical diagnosis and boosting autonomous vehicle technology, there are use cases in security, gaming, and even an app that tells you if something is or isn’t a hot dog, because sometimes we need reassurance. The tech in discussion works by processing individual pixels from tons of pre-labeled images for the network to “learn.”

Image classification is hard because machine-learning models can latch onto these nonsensical, subtle signals. Then, when image classifiers are trained on datasets such as ImageNet, they can make seemingly reliable predictions based on those signals.

Although these nonsensical signals can lead to model fragility in the real world, the signals are actually valid in the datasets, meaning overinterpretation can’t be diagnosed using typical evaluation methods based on test accuracy.


Monday, January 17, 2022

Machine learning models quantum devices

 

A novel algorithm allows for efficient and accurate verification of quantum devices

Technologies that take advantage of novel quantum mechanical behaviors are likely to become commonplace in the near future. These may include devices that use quantum information as input and output data, which require careful verification due to inherent uncertainties. The verification is more challenging if the device is time dependent, that is, when the output depends on past inputs. For the first time, researchers using machine learning dramatically improved the efficiency of verification for time-dependent quantum devices by incorporating a certain memory effect present in these systems.


Quantum computers make headlines in the scientific press, but these machines are considered by most experts to still be in their infancy. A quantum internet, however, may be a little closer at hand. This would offer significant security advantages over our current internet, amongst other things. But even this will rely on technologies that have yet to see the light of day outside the lab. While many fundamentals of the devices that can create our quantum internet may have been worked out, there are many engineering challenges to overcome before these can be realized as products. But much research is underway to create tools for the design of quantum devices.

Postdoctoral researcher Quoc Hoan Tran and Associate Professor Kohei Nakajima from the Graduate School of Information Science and Technology at the University of Tokyo have pioneered just such a tool, which they think could make verifying the behavior of quantum devices a more efficient and precise undertaking than it is at present. Their contribution is an algorithm that can reconstruct the workings of a time-dependent quantum device by simply learning the relationship between the quantum inputs and outputs. This approach is actually commonplace when exploring a classical physical system, but quantum information is generally tricky to store, which usually makes it impossible.

"The technique to describe a quantum system based on its inputs and outputs is called quantum process tomography," said Tran. "However, many researchers now report that their quantum systems exhibit some kind of memory effect where present states are affected by previous ones. This means that a simple inspection of input and output states cannot describe the time-dependent nature of the system. You could model the system repeatedly after every change in time, but this would be extremely computationally inefficient. Our aim was to embrace this memory effect and use it to our advantage rather than use brute force to overcome it."
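The idea of embracing a memory effect can be illustrated with a simple classical analogy (this is not the authors' quantum algorithm, and the coefficients are invented): instead of treating each output as a function of the current input alone, fit a map that also sees the previous input, here y_t = a·x_t + b·x_{t-1}, learned by least squares over delayed inputs.

```python
# Classical analogy for learning a time-dependent input-output map with
# memory: fit y_t = a*x_t + b*x_{t-1} by solving the 2x2 normal equations.
# The true coefficients a=0.7, b=0.3 are made up for this illustration.

def fit_with_memory(xs, ys):
    """Least-squares fit of y_t = a*x_t + b*x_{t-1} over a time series."""
    cur, prev, targ = xs[1:], xs[:-1], ys[1:]
    sxx = sum(x * x for x in cur)
    spp = sum(p * p for p in prev)
    sxp = sum(x * p for x, p in zip(cur, prev))
    sxy = sum(x * y for x, y in zip(cur, targ))
    spy = sum(p * y for p, y in zip(prev, targ))
    det = sxx * spp - sxp * sxp          # Cramer's rule for the 2x2 system
    a = (sxy * spp - spy * sxp) / det
    b = (spy * sxx - sxy * sxp) / det
    return a, b

xs = [0.1, 0.9, 0.4, 0.7, 0.2, 0.8, 0.5]
ys = [0.0] + [0.7 * xs[t] + 0.3 * xs[t - 1] for t in range(1, len(xs))]
a, b = fit_with_memory(xs, ys)
print(a, b)  # a ≈ 0.7, b ≈ 0.3: the memory term is recovered from data alone
```

The point of the analogy is the one Tran makes: by including the delayed input as a feature, the memory effect becomes part of the learned model instead of something to be modeled repeatedly after every change in time.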



Thursday, January 13, 2022

Use of robots, AI in defence applications rising: official


He calls on engineering graduates to join the defence sector

With the wide use of different types of robots in the defence sector, there is good scope for engineering graduates in this field, said S. Krishna Kumar, Technical Officer, DRDO/CVRDE (Defence Research and Development Organisation/Combat Vehicles Research and Development Establishment), Avadi, here on Wednesday.

Delivering a lecture at Velammal College of Engineering and Technology, as part of the ‘Azadi Ka Amrit Mahotsav’ initiative, on the role of DRDO in self-reliance and defence technologies, he said that the application of Artificial Intelligence (AI) and Machine Learning and the use of robots were increasing. Sustained research had led to the launch of robotic dogs, mules and snakes equipped with Artificial Intelligence, suited to all climatic and geographical conditions. He explained the different types of robots and the way they are trained.

For example, the dog robot can acclimatise itself to any region. It is trained in such a way that it has walking practice in the morning and can join a parade if used in defence applications. There are robots of all sizes, starting from the size of a mosquito. Such small robots are deployed as a group to monitor a place; if they detect something wrong, they pass a message to the control room. Mule robots are used to lift weights – one can even lift a car. There are also robots in the shape of scorpions and birds. Bird robots can track the path taken by a person, he said.



Monday, January 10, 2022

10 HIGH PAYING ROBOTICS JOBS TO BE AVAILABLE IN INDIA IN 2022




These robotics jobs will emerge for tech enthusiasts in India in 2022

Global demand for AI talent has doubled in the past few years. Robots are becoming increasingly popular in the manufacturing industry, and according to experts, this demand is not cresting anytime soon. Substantial advancements in technology have spurred this period of robotics growth. With robotics companies constantly innovating new software and robotics features, along with accelerated robot production with increased payload capacities and improved reach, who knows how far the industry might extend in the next decade. India, one of the strongest growing tech economies among the Asian emerging markets, is rapidly moving towards tech automation. According to reports, the Indian industrial robotics market is expected to grow at a CAGR of 13.3% between 2019 and 2024. This phenomenon has also led to the emergence of numerous robotics jobs in India. This article lists the top high-paying robotics jobs that will be available in India in 2022.
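As a quick arithmetic check on the growth figure above: a compound annual growth rate (CAGR) of 13.3% over the five years from 2019 to 2024 multiplies the market size by (1 + 0.133)^5. The sketch below is purely illustrative:

```python
# Compound annual growth: total growth factor implied by a constant
# annual rate over a number of years, e.g. a 13.3% CAGR for 2019-2024.

def cagr_growth(rate, years):
    """Total growth factor: (1 + rate) ** years."""
    return (1 + rate) ** years

factor = cagr_growth(0.133, 5)
print(round(factor, 2))  # 1.87 -- roughly a 1.87x market over five years
```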

Robotics Engineer: A robotics engineer is responsible for designing, developing, and building robots that are productive, safe to operate, and economical. They work with teams of robotics developers, computer scientists, and other engineers. In India, the average salary of a robotics engineer depends on experience and qualifications.

Robot Programmer: A robot programmer’s work is often very detail-oriented, and one minor mistake can cause a chain reaction of actions. The role includes systems work, including customer support and training, as well as robotics programming. Generally, employers look for candidates with backgrounds in electrical engineering, computer science, and other related fields.

Robotics Automation Engineer: A robotics automation engineer reviews all changes and modifications to the control plan to ensure their relevance and compliance with operations resulting from continuous-improvement activities, as well as with internal and external customer requirements, and guides the activities of robot technicians.

Software Developer: A software developer monitors, develops, and supports high-performance data processing systems and identifies, designs, and implements internal process improvements, including automating manual processes, optimizing data delivery, and more.



Thursday, January 6, 2022

10 AI Predictions For 2022




1) Language AI will take center stage, with more startups getting funded in NLP than in any other category of AI.
Language is humanity’s most important invention. More than any other attribute, it is the defining hallmark of our species’ intelligence.
Naturally, language pervades every facet of every business activity across every sector. The ability to accurately automate language therefore opens up virtually unbounded opportunities for value creation.

2) Databricks, DataRobot and Scale AI will all go public.
These three companies are among the first wave of big winners in the modern AI economy. They each provide tools and infrastructure to help other companies build AI, reflecting the common theme across technology cycles that infrastructure precedes applications.

3) At least three climate AI startups will become unicorns.
Climate tech has rapidly become one of the hottest categories in the world of startups, with record amounts of venture capital pouring into the sector this year. As previously explored in this column, opportunities abound for startups at the intersection of climate and artificial intelligence.

4) Powerful new AI tools will be built for video.
Video has become the dominant medium for our digital lives. Over 80% of all Internet data in 2022 will be video, according to Cisco. Every day, 7 billion videos are watched on YouTube and 100 million videos are uploaded to TikTok. From Netflix to Amazon Prime Video to Disney+ to Hulu to HBO Max and beyond, Internet streaming services’ user bases and content libraries continue to balloon.

5) An NLP model with over 10 trillion parameters will be built.
The field of natural language processing (NLP) today is defined by the development of ever-larger transformer-based models. This arms race will continue in 2022 (notwithstanding intriguing recent work from DeepMind on the power of smaller models).

6) Collaboration and investment will all but cease between American and Chinese actors in the field of AI.
It is no secret that geopolitical tensions between the United States and China are ratcheting up, with cutting-edge technologies like artificial intelligence representing a particularly contentious touchpoint in the conflict. This will get worse—much worse—in 2022.
In just the past few weeks, the U.S. government added AI startup SenseTime, drone company DJI, and several other leading Chinese AI organizations to an investment blacklist. These are among the most important AI companies in China.

7) Multiple large cloud/data platforms will announce new synthetic data initiatives.
Getting the right data is the most important and the most challenging part of building AI products today. Synthetic data offers compelling advantages over the status-quo approach of collecting and labeling real-world datasets.
Gartner has predicted that by 2024, synthetic data will account for 60% of all data used in AI development. Facebook’s acquisition of synthetic data startup AI.Reverie two months ago is a canary in the coalmine.


8) Toronto will establish itself as the most important AI hub in the world outside of Silicon Valley and China.
It is not an exaggeration to say that modern artificial intelligence was invented in Toronto, thanks to the work of deep learning pioneers like Geoff Hinton. Though it generates less buzz than other geographies, Toronto remains one of the most important AI hubs in the world.

9) “Responsible AI” will begin to shift from a vague catch-all term to an operationalized set of enterprise practices.
AI technology is improving faster than our ability to deploy it responsibly, ethically and equitably.
A growing movement has emerged to advocate for the responsible use of AI, led by researchers like Timnit Gebru, Joy Buolamwini and Cathy O'Neil. This push for more responsible AI spans a broad set of issues including AI bias, data provenance, model explainability and model auditability.

10) Reinforcement learning will become an increasingly important and influential AI paradigm.
The dominant approach to AI today is supervised learning, which entails collecting a lot of data, labeling it, and feeding it into an AI model so that the AI learns useful patterns about the world. Unsupervised learning, a similar approach but without the need for human-generated labels, has also begun to gain traction in recent years.
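The supervised-learning loop described above (collect labeled data, fit a model, predict) can be sketched with a minimal nearest-centroid classifier; the 2-D points and "cat"/"dog" labels below are invented for illustration:

```python
# Minimal supervised learning: average the labeled training points into
# per-class centroids, then predict the label of the nearest centroid.
# The data and labels are invented for this illustration.

def fit_centroids(points, labels):
    """Compute the mean (x, y) of the training points for each label."""
    sums, counts = {}, {}
    for (x, y), lab in zip(points, labels):
        sx, sy = sums.get(lab, (0.0, 0.0))
        sums[lab] = (sx + x, sy + y)
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: (sx / counts[lab], sy / counts[lab])
            for lab, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Return the label whose centroid is closest to the point."""
    px, py = point
    return min(centroids,
               key=lambda lab: (centroids[lab][0] - px) ** 2
                             + (centroids[lab][1] - py) ** 2)

points = [(0.0, 0.1), (0.2, 0.0), (1.0, 1.1), (0.9, 1.0)]
labels = ["cat", "cat", "dog", "dog"]        # human-generated labels
model = fit_centroids(points, labels)         # "training"
print(predict(model, (0.1, 0.0)))  # cat
print(predict(model, (1.0, 1.0)))  # dog
```

Unsupervised learning, by contrast, would have to discover the two clusters from the points alone, without the "cat"/"dog" labels.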


Sunday, January 2, 2022

Science Made Simple: What Is Artificial Intelligence?


What Is Artificial Intelligence, in Simple Terms?


Artificial Intelligence (AI) simply means intelligence in machines, in contrast to natural intelligence found in humans and other natural organisms. Artificial intelligence gained its name and became a formal field of research in 1956, and initial work led to new tools for solving mathematical problems. However, researchers discovered that creating an AI is incredibly difficult, and progress slowed in the 1970s. More recently, increases in computing power and availability of massive data sets have set the groundwork for advances in AI.

