Pages

Friday, July 29, 2022

Plaksha University Collaborates With University of California For Joint Research, Data Science Programmes








Plaksha University, located in Mohali, Punjab, has signed a memorandum of understanding (MoU) with the University of California San Diego’s Halıcıoğlu Data Science Institute (HDSI). The two institutions have inked a pact for joint research and data science programs. Plaksha University aims to build a tech university of “global eminence".

The MoU covers areas like cooperation in the fields of computer science, data science and other mutually beneficial areas of interest for both the institutions, claimed the press release by the university.

As part of the pact, the two universities will take up research internships, special short-term programs (including executive training programs, workshops and visits), joint research and development projects, joint publications, and the exchange of materials and information. Student, scholar, researcher and faculty mobility will also be supported.

The MoU was signed by Prof. Rudra Pratap, the Founding Vice Chancellor of Plaksha University and Prof. Pradeep K Khosla, Chancellor of the University of California, San Diego at the Plaksha University campus in Mohali.
Advertisement


Earlier this year, Prof. Rajesh Gupta, Professor and Qualcomm Endowed Chair in the Department of Computer Science and Engineering at UCSD, joined Plaksha’s Academic Advisory Board and served as General Co-chair of Plaksha’s Conference on AI. He will lead this partnership from UCSD.

Prof. Rudra Pratap, the Founding Vice Chancellor of Plaksha University (former Deputy Director of Indian Institute of Science) said, “It is a great opportunity for our faculty and students to define their academic careers using the knowledge of both institutions.”

Prof. Pradeep K Khosla, Chancellor of the University of California, San Diego, said, “With the help of this collaboration, students from Plaksha University and HDSI will be tremendously inspired to work on engaging research projects and will have the opportunity to interact with faculty members from both institutes. We anticipate working with them on new and interesting projects.”


www.dprg.com

Tuesday, July 26, 2022

Data Science vs. Decision Science: What’s the Difference?





Data scientists and decision scientists do very different, though equally important, work. Here’s how to tell the difference.



At Instagram, we had many different job roles that analyzed data. A few of the data job titles included data scientist, analyst, researcher and growth marketer.

There’s often a lot of confusion between the roles of data scientist vs. decision scientist.

We had both at Instagram and they fulfilled different needs, so I thought I’d explain the main differences I see from my personal experience in the decision science role, working closely with my data science colleagues.


DATA SCIENCE VS. DECISION SCIENCE


The data scientist focuses on finding insights and relationships via statistics. The decision scientist is looking to find insights as they relate to the decision at-hand. Example decisions might include: age groups on which to focus, the most optimal way to spend a yearly budget or how to measure a non-traditional media mix. For decision scientists, the business problem comes first; analysis follows and is dependent on the question or business decision that needs to be made.

DATA SCIENTISTS

Data is the Tool for Improving and Developing New Products Based on Robust Statistical Methods

Data scientists are looking to understand, interpret and analyze with the goal of building better products. Therefore, data quality, statistical rigor and measurement perfection are often their trademarks.

For data scientists, the analysis, statistical rigor and understanding comes first. Business challenges come second. Data scientists think about data in terms of data patterns, data processing, algorithms and statistics. Often, data scientists are conducting deep analysis and experimental statistics. They are obsessed with finding causal relationships.

Data scientists are deeply focused on data quality as it relates to their product area because better data quality results in more thorough statistical analysis.

Data scientists frame data analysis in terms of algorithms, machine learning, statistics and experimentation. They are looking to bring order to big data to find insights and learnings as they relate to their product or focus area. They apply a statistics lens to everything they do.
Data scientists’ north star goal: Use high-quality data and robust statistics to support product development.


DECISION SCIENTISTS

Data is the Tool to Make Decisions

Decision scientists frame data analysis in terms of the decision-making process. They are looking at the various ways of analyzing data as it relates to a specific business question posed by their stakeholder(s).

Other names for this role may include: analytics, analyst and applied analytics.


The decision scientist therefore needs to take a 360-degree view of the business challenge. They need to consider the type of analysis, visualization methods and behavioral understanding that can help a stakeholder make a specific decision.

In other words, decision scientists need to make insights usable. They need to be able to work with a variety of data sources and inputs — each selected based on its ability to help answer the business question. This means a decision scientist needs to have a strong business acumen as well as a robust analytical mind. You cannot have one without the other in a decision science role.

Sometimes, measurement won’t be perfect. Business tactics aren’t always neat and tidy. For example, there is almost no clean way to create a test and control for viral or celebrity marketing, but these are both legitimate marketing approaches and the decision scientist needs to be okay with that. Businesses shouldn’t take an action so that it can be measured, but because it is the right thing to do; measurement comes next.

Sometimes a clean, causal experiment is possible and sometimes it isn’t. Decision scientists need to have a keen sense of when it’s appropriate to move forward with a decision based on correlations and when they need to push for a clean experiment. It all comes back to the business context and the decision at-hand.
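When a clean test/control split does exist, the read-out itself is straightforward; the judgment call is whether the setup justifies one. A minimal sketch, using made-up signup counts rather than any real data, might compute a Welch’s t statistic on the two groups:

```python
import statistics

# Hypothetical daily signups under a clean test/control split, e.g. a paid-ads
# experiment. Channels like viral or celebrity marketing rarely allow such a split.
control = [102, 98, 105, 110, 99, 101, 97, 104]
test = [111, 118, 109, 121, 115, 112, 117, 119]

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(b) - statistics.mean(a)) / (va / len(a) + vb / len(b)) ** 0.5

print(f"uplift: {statistics.mean(test) - statistics.mean(control):.2f}")
print(f"t statistic: {welch_t(control, test):.2f}")
```

A large t suggests the uplift is unlikely to be noise; when no control group exists, as with viral or celebrity marketing, no such statistic is available, and the decision rests on correlational evidence and business judgment.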
Decision scientists’ north star goal: use data and statistics to support business decision making, budgeting and marketing spend.

Data Science vs. Decision Science: In the Real World

In my own experience at Instagram, each data scientist was dedicated to one specific product or product feature. They spent a lot of time ensuring the data logging was accurate for that product area, running statistical analyses on trends and using complex visuals to display their analysis. They had deep knowledge of their product, but not of the ecosystem.

If the product changes or we launch new features attached to their product, the data scientist is responsible for both logging the new data and measuring the uptake of the new features.

On the flip side, I was in the decision science job group. My team and I supported the marketing group and the marketing leadership in helping them make decisions about marketing budgets and priorities.

I relied heavily on the tables, logging and analysis from my data science colleagues as the basis for our marketing activities. I then augmented their work with my own analysis to help our marketing leadership make decisions on where and when to spend marketing budget.

My visuals were designed for consumption and business action, and therefore had a different goal than the data scientists’ goal of using visuals to display complex analysis.

Because data scientists focus on one product area only, my analysis tended to look at relationships across products and the impact of demographics on product behavior at the company level.

My decision science team was the only team that looked at the full ecosystem on a regular basis, because marketing decisions revolve around understanding how one behavior interacts with another.

As you can hopefully see, there are some subtle but important differences here.

The decision scientist sits hip-to-hip with decision makers and management to help them make the best decisions for the business. Decision scientists are equal parts business leader and data analyst.

The data scientist sits hip-to-hip with data and statistical rigor. Data scientists are relentless about quality and deep analyses that drive products to scale and develop based on usage data.

Each role is necessary and critically important.

Decisions need to be made quickly to keep the business moving forward based on what is knowable now. This is the job of the decision scientist.

The business also needs to grow, scale and build better products. Deep product knowledge, a high standard of data quality and statistical rigor help ensure they’re pulling out the best insights so product leaders understand their domains. This is the job of the data scientist.

A business needs to both move forward with decision making while also improving its products for the longer term, so the decision scientist and the data scientist both contribute to the greater health of the company.



Thursday, July 21, 2022

Google highlights new machine learning features heading to Chrome


Google uses machine learning (ML) models to offer a host of useful features across almost all of its products, and the company’s popular browser, Google Chrome, is no exception. Chrome already offers several ML-powered features on all platforms, including features that make web images more accessible to visually impaired users and real-time video captions for users who are hard of hearing. But these aren’t the only ML features found in Google Chrome.

In a recent blog post, Google highlighted some recently released ML features that are now making their way to more Chrome users. Additionally, the company has revealed a few other new features that should reach users with future builds.

Safe Browsing in Google Chrome, for instance, is an ML-powered feature that shows warnings when users try to navigate to dangerous websites or download malicious files. It has been around for a while, but Google recently rolled out a new ML model that identifies 2.5 times more potentially malicious sites and phishing attacks than the previous model, making it an even more valuable tool for Chrome users. The feature can also silence potentially malicious notifications from websites, and Chrome will soon be able to do all this entirely on-device.



www.dprg.co.in



Sunday, July 17, 2022

Building explainability into the components of machine-learning models

for details: https://www.xda-developers.com/google-machine-learning-optimizations-chrome/





Researchers develop tools to help data scientists make the features used in machine-learning models more understandable for end users.

Explanation methods that help users understand and trust machine-learning models often describe how much certain features used in the model contribute to its prediction. For example, if a model predicts a patient’s risk of developing cardiac disease, a physician might want to know how strongly the patient’s heart rate data influences that prediction.

But if those features are so complex or convoluted that the user can’t understand them, does the explanation method do any good?

MIT researchers are striving to improve the interpretability of features so decision makers will be more comfortable using the outputs of machine-learning models. Drawing on years of field work, they developed a taxonomy to help developers craft features that will be easier for their target audience to understand.

“We found that out in the real world, even though we were using state-of-the-art ways of explaining machine-learning models, there is still a lot of confusion stemming from the features, not from the model itself,” says Alexandra Zytek, an electrical engineering and computer science PhD student and lead author of a paper introducing the taxonomy.

To build the taxonomy, the researchers defined properties that make features interpretable for five types of users, from artificial intelligence experts to the people affected by a machine-learning model’s prediction. They also offer instructions for how model creators can transform features into formats that will be easier for a layperson to comprehend.

They hope their work will inspire model builders to consider using interpretable features from the beginning of the development process, rather than trying to work backward and focus on explainability after the fact.

MIT co-authors include Dongyu Liu, a postdoc; visiting professor Laure Berti-Équille, research director at IRD; and senior author Kalyan Veeramachaneni, principal research scientist in the Laboratory for Information and Decision Systems (LIDS) and leader of the Data to AI group. They are joined by Ignacio Arnaldo, a principal data scientist at Corelight. The research is published in the June edition of the Association for Computing Machinery Special Interest Group on Knowledge Discovery and Data Mining’s peer-reviewed Explorations Newsletter.

Real-world lessons

Features are input variables that are fed to machine-learning models; they are usually drawn from the columns in a dataset. Data scientists typically select and handcraft features for the model, and they mainly focus on ensuring features are developed to improve model accuracy, not on whether a decision-maker can understand them, Veeramachaneni explains.

For several years, he and his team have worked with decision makers to identify machine-learning usability challenges. These domain experts, most of whom lack machine-learning knowledge, often don’t trust models because they don’t understand the features that influence predictions.

For one project, they partnered with clinicians in a hospital ICU who used machine learning to predict the risk a patient will face complications after cardiac surgery. Some features were presented as aggregated values, like the trend of a patient’s heart rate over time. While features coded this way were “model ready” (the model could process the data), clinicians didn’t understand how they were computed. They would rather see how these aggregated features relate to original values, so they could identify anomalies in a patient’s heart rate, Liu says.
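An aggregate like a heart-rate “trend” can be as simple as a least-squares slope over a window of readings. The project’s actual formulas aren’t given here, so the sketch below is illustrative only, with made-up readings:

```python
# Illustrative only: a "model ready" trend feature computed from raw
# heart-rate readings. Clinicians in the study wanted to see the raw values
# behind aggregates like this, not just the single summary number.
def trend(values):
    """Least-squares slope of values over equally spaced time steps."""
    n = len(values)
    mx = (n - 1) / 2                       # mean of time indices 0..n-1
    my = sum(values) / n
    num = sum((x - mx) * (y - my) for x, y in enumerate(values))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

heart_rate = [72, 74, 73, 77, 80, 79, 83]  # hypothetical readings
print(round(trend(heart_rate), 2))         # slope in bpm per time step
```

The single slope is compact and model-ready, but on its own it hides the raw readings a clinician would want to inspect for anomalies.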

By contrast, a group of learning scientists preferred features that were aggregated. Instead of having a feature like “number of posts a student made on discussion forums” they would rather have related features grouped together and labeled with terms they understood, like “participation.”

“With interpretability, one size doesn’t fit all. When you go from area to area, there are different needs. And interpretability itself has many levels,” Veeramachaneni says.

The idea that one size doesn’t fit all is key to the researchers’ taxonomy. They define properties that can make features more or less interpretable for different decision makers and outline which properties are likely most important to specific users.

For instance, machine-learning developers might focus on having features that are compatible with the model and predictive, meaning they are expected to improve the model’s performance.

On the other hand, decision makers with no machine-learning experience might be better served by features that are human-worded, meaning they are described in a way that is natural for users, and understandable, meaning they refer to real-world metrics users can reason about.

“The taxonomy says, if you are making interpretable features, to what level are they interpretable? You may not need all levels, depending on the type of domain experts you are working with,” Zytek says.

Putting interpretability first

The researchers also outline feature engineering techniques a developer can employ to make features more interpretable for a specific audience.

Feature engineering is a process in which data scientists transform data into a format machine-learning models can process, using techniques like aggregating data or normalizing values. Most models also can’t process categorical data unless it is converted to a numerical code. These transformations are often nearly impossible for laypeople to unpack.

Creating interpretable features might involve undoing some of that encoding, Zytek says. For instance, a common feature engineering technique bins continuous data, such as ages, into spans that all contain the same number of years. To make these features more interpretable, one could group age ranges using human terms, like infant, toddler, child, and teen. Or rather than using a transformed feature like average pulse rate, an interpretable feature might simply be the actual pulse-rate data, Liu adds.
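As a minimal sketch of that grouping step (the bin edges here are my own illustrative choices, not taken from the paper):

```python
# Replace a raw numeric age with a label a layperson can read directly.
# The bin edges below are illustrative assumptions, not taken from the paper.
AGE_BINS = [(1, "infant"), (4, "toddler"), (13, "child"), (20, "teen")]

def age_group(age):
    for upper, label in AGE_BINS:
        if age < upper:
            return label
    return "adult"

ages = [0.5, 2, 9, 15, 34]
print([age_group(a) for a in ages])
# ['infant', 'toddler', 'child', 'teen', 'adult']
```

A model would still consume an encoded version of these labels, but the human-readable grouping is what reaches the decision maker.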

“In a lot of domains, the tradeoff between interpretable features and model accuracy is actually very small. When we were working with child welfare screeners, for example, we retrained the model using only features that met our definitions for interpretability, and the performance decrease was almost negligible,” Zytek says.

Building off this work, the researchers are developing a system that enables a model developer to handle complicated feature transformations in a more efficient manner, to create human-centered explanations for machine-learning models. This new system will also convert algorithms designed to explain model-ready datasets into formats that can be understood by decision makers.






Wednesday, July 13, 2022

Enough with the AI washing!








In 2010, web developers and designers Dan Tocchini, Brian Axe, Leigh Taylor, and Henri Bergius set out to build perhaps the world’s first AI-based site builder, The Grid. Just one and a half years later, Facebook offered USD 10-15 million to acquire The Grid, though it had no commercial product. In 2014, the team launched a crowdfunding campaign to raise USD 70,000.

“We’ve spent the last few years building a form of artificial intelligence that functions like your own personal graphic designer, able to think about your brand and present it in the best way possible,” said Dan, CEO and co-founder of The Grid. “The design adapts to your content, not the other way around.”

For a brief moment, all was well. The team was gearing up for a Spring launch in 2015 after raising USD 4.6 million in Series A.

But when the time came, only 100 of the 50,000+ backers from the crowdfunding got access to a Beta product. A year later, the company launched the final product and chaos ensued.

“AI is definitely over-hyped. Don’t buy into it or fall for it. AI is the cure-all tonic of the 21st century. It solves everything or could kill everything. At least that’s what science fiction would lead you to believe. Like the cure-all tonics of the early 20th century, AI won’t solve every problem or come close any time soon,” said Josh Greig, software developer at Next Healthcare Technologies.

“Elon Musk is quite brilliant but overestimates the speed and quality of software Tesla can produce for autonomous driving. Tesla has frequent delays and fatal car accidents related to its Autopilot technology as a result of this wishful thinking and rushed deployment. The hype pushes advertisers to use “AI” whenever anything software-related is used to solve a problem now,” he added.
Hyped much?

As per IDC’s latest reports, the global AI market is expected to cross the USD 500 billion mark in 2023. “AI, over the past few years, has become a critical addition for enterprise toolkits. Across industry surveys, and from our own experience, we are noticing that companies are reporting benefits of AI adoption on their bottom line. While researchers and many companies are experimenting with some exciting technologies, enterprise software is certainly among the most successful use cases for proving the utility of AI technologies,” said Onnivation’s founder & CEO Saket Agarwal.

According to Bert Labs’ Executive Chairman and CEO Rohit Kochar, AI is a general-purpose technology, just like electricity, that is reshaping the future.

“Currently, machines are intelligent enough to replace some mundane tasks, and automate some level of data processing and recognition. But they aren’t intelligent enough to make business decisions. So far, commercially available tech in the market provides machines (AI) that are able to process large amounts of data and identify and sort them, but aren’t capable of providing actionable insights,” said Dinesh Varadharaj, CPO, Kissflow Inc.




Saturday, July 9, 2022

The Future Of AI: 5 Things To Expect In The Next 10 Years



There has been no better time to be in the world of artificial intelligence than now. AI has achieved an inflection point and is poised to transform every industry. Much has already been written about specific applications of AI. In this article, I take a step back to consider how artificial intelligence is poised to fundamentally restructure broader swaths of our economy and society over the next decade with five bold predictions that are informed by my expertise and immersion in the field.


1. AI and ML will transform the scientific method.


Important science—think large-scale clinical trials or building particle colliders—is expensive and time-consuming. In recent decades there has been considerable, well-deserved concern about scientific progress slowing down. Scientists may no longer be experiencing the golden age of discovery.




With AI and machine learning (ML), we can expect to see orders of magnitude of improvement in what can be accomplished. There's a certain set of ideas that humans can computationally explore. There’s a broader set of ideas that humans with computers can address. And there’s a much bigger set of ideas that humans with computers, plus AI, can successfully tackle. AI enables an unprecedented ability to analyze enormous data sets and computationally discover complex relationships and patterns. AI, augmenting human intelligence, is primed to transform the scientific research process, unleashing a new golden age of scientific discovery in the coming years.

2. AI will become a pillar of foreign policy.

We are likely to see serious government investment in AI. U.S. Secretary of Defense Lloyd J. Austin III has publicly embraced the importance of partnering with innovative AI technology companies to maintain and strengthen global U.S. competitiveness.




The National Security Commission on Artificial Intelligence has created detailed recommendations, concluding that the U.S. government needs to greatly accelerate AI innovation. There’s little doubt that AI will be imperative to the continuing economic resilience and geopolitical leadership of the United States.


3. AI will enable next-gen consumer experiences.

Next-generation consumer experiences like the metaverse and cryptocurrencies have garnered much buzz. These experiences and others like them will be critically enabled by AI. The metaverse is inherently an AI problem because humans lack the sort of perception needed to overlay digital objects on physical contexts or to understand the range of human actions and their corresponding effects in a metaverse setting.

More and more of our life takes place at the intersection of the world of bits and the world of atoms. AI algorithms have the potential to learn much more quickly in a digital world (e.g., virtual driving to train autonomous vehicles). These are natural catalysts for AI to bridge the feedback loops between the digital and physical realms. For instance, blockchain, cryptocurrency and distributed finance, at their core, are all about integrating frictionless capitalism into the economy. But to make this vision real, distributed applications and smart contracts will require a deeper understanding of how capital activities interact with the real world, which is an AI and ML problem.

4. Addressing the climate crisis will require AI.

As a society we have much to do in mitigating the socioeconomic threats posed by climate change. Carbon pricing policies, still in their infancy, are of questionable effectiveness.

Many promising emerging ideas require AI to be feasible. One potential new approach involves prediction markets powered by AI that can tie policy to impact, taking a holistic view of environmental information and interdependence. This would likely be powered by digital "twin Earth" simulations that would require staggering amounts of real-time data and computation to detect nuanced trends imperceptible to human senses. Other new technologies such as carbon dioxide sequestration cannot succeed without AI-powered risk modeling, downstream effect prediction and the ability to anticipate unintended consequences.

5. AI will enable truly personalized medicine.

Personalized medicine has been an aspiration since the decoding of the human genome. But tragically it remains an aspiration. One compelling emerging application of AI involves synthesizing individualized therapies for patients. Moreover, AI has the potential to one day synthesize and predict personalized treatment modalities in near real-time—no clinical trials required.

Simply put, AI is uniquely suited to construct and analyze "digital twin" rubrics of individual biology and is able to do so in the context of the communities an individual lives in. The human body is mind-boggling in its complexity, and it is shocking how little we know about how drugs work. Without AI, it is impossible to make sense of the massive datasets from an individual’s physiology, let alone the effects on individual health outcomes from environment, lifestyle and diet. AI solutions have the potential not only to improve the state of the art in healthcare, but also to play a major role in reducing persistent health inequities.

Final Thoughts

The applications of artificial intelligence are likely to impact critical facets of our economy and society over the coming decade.






Wednesday, July 6, 2022

A First Small Step Toward a Lego-Size Humanoid Robot



When we think of bipedal humanoid robots, we tend to think of robots that aren’t just human-shaped, but also human-sized. There are exceptions, of course—among them, a subcategory of smaller humanoids that includes research and hobby humanoids that aren’t really intended to do anything practical. But at the IEEE International Conference on Robotics and Automation (ICRA) last week, roboticists from Carnegie Mellon University (CMU) asked an interesting question: What happens if you try to scale down a bipedal robot? Like, way down? This line from the paper sums it up: “Our goal with this project is to make miniature walking robots, as small as a LEGO Minifigure (1-centimeter leg) or smaller.”

The current robot, while small (its legs are 15 cm long), is obviously much bigger than a Lego minifig. But that’s okay, because it’s not supposed to be quite as tiny as the group's ultimate ambition would have it. At least not yet. It’s a platform that the CMU researchers are using to figure out how to proceed. They’re still assessing what it’s going to take to shrink bipedal walking robots to the point where they could ride in Matchbox cars. At very small scales, robots run into all kinds of issues, including space and actuation efficiency. These crop up mainly because it’s simply not possible to cram the same number of batteries and motors that go into bigger bots into something that tiny. So, in order to make a tiny robot that can usefully walk, designers have to get creative.



Friday, July 1, 2022

HUMANOID ROBOT SOPHIA ARRIVES IN KERALA

Social humanoid robot Sophia arrived in Kerala amid its journeys all over the world. 



Thiruvananthapuram: Sophia, which is considered to be among the best humanoid robots, arrived in Kerala amid its journeys all over the world.

Read more at: https://english.mathrubhumi.com/features/technology/humanoid-robot-sophia-arrives-kerala-college-of-engineering-thiruvananthapuram-tech-fest-2022-1.7641400
