
Wednesday, March 30, 2022

METAVERSE IN BIG DATA ANALYTICS: WHY SHOULD CIOS BE ON ALERT?


 The Metaverse is reshaping the future of business. Why should CIOs be concerned about the metaverse in big data?

Contents include:

* THE METAVERSE – ENVIRONMENTAL IMPACTS & FUTURE

* HOW ARE BUSINESSES USING METAVERSE FOR MARKETING DEVELOPMENT?

* HIGH TIME FOR COMPANIES TO HIRE A CHIEF METAVERSE OFFICER?

* The impact of the metaverse in big data analytics

* IT’s next step in the metaverse

The metaverse is a blend of virtual reality, augmented reality, and mixed reality that will blur the lines between online and real-life interactions. It will become a place where people can work, shop, play, and be entertained. A range of brands, from Adidas to Balenciaga, have moved into the metaverse, and beyond this, several brands are already trading non-fungible tokens (NFTs) in the virtual world.

In collaboration with UNXD, a curated marketplace for digital luxury, D&G recently launched ‘Collezione Genesi,’ a non-fungible token collection that fetched around US$5.6 million at auction. Hoping to make this NFT collection the first of many, D&G aims to grant buyers different levels of exclusive brand content and insider access. D&G is leading innovation in its sector, recognizing the value the metaverse can bring to the business and uncovering new data insights into customer behavior and buying habits.


Saturday, March 26, 2022

WATCH OUT FOR THE EXECUTIVE PROGRAMS IN DATA SCIENCE

 

Most of the executive programs in data science are not technical but business-oriented; read on to learn more about such programs.

Highlights:

* Foundations in statistics using R, plus business and management concepts

* Analytical techniques using R, Python, and SAS

* Tableau and Power BI for data visualization

* Big data concepts using Hadoop, HDFS, Pig, Hive, Sqoop, Flume, Spark, and Storm

* Advanced data wrangling, data mining, statistical modeling, and machine learning applications on datasets

Some may include basics of ANOVA, regression analysis, logistic regression, decision trees, cluster analysis, and neural networks.
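As a quick illustration of one of the listed techniques, here is a minimal logistic-regression sketch in Python with scikit-learn on a synthetic dataset. The dataset and library choice are assumptions made for illustration; the programs themselves may teach these techniques in R or SAS instead.

```python
# Minimal, illustrative sketch of logistic regression with scikit-learn.
# The synthetic dataset stands in for a real business dataset (assumption).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 500 samples, 8 features, reproducible split
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit the model and score it on held-out data
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```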

Furthermore, some courses offer the chance to study while pursuing a career, with flexible part-time and online options. In eLearning-based executive programs, the study material is delivered Direct-to-Device (D2D) to the student’s device of choice while classes are held online. Given the shortage of skilled personnel, now is the right time to sign up for a course.


Monday, March 21, 2022

Humanoid robot Sophia to participate in Dubai internal auditors' conference

 



Highlights:

* Her appearance is part of the conference's efforts to highlight the role of AI in the future of the auditing industry

* Sophia, modeled after British actress Audrey Hepburn, Egyptian queen Nefertiti, and her inventor's wife, Amanda Hanson, was granted Saudi citizenship in October 2017.



Sophia, the world's first robot to be granted citizenship, will be in Dubai on March 8 for the Annual Regional Audit Conference. The social humanoid will participate in a discussion about the future of artificial intelligence.

According to the UAE Internal Auditors Association, Sophia's participation is part of the conference's efforts to highlight the role of AI in the future of the auditing industry. Sophia was created by Hong Kong-based Hanson Robotics.

"The organisers facilitated Sophia's presence at the smart conference as a symbol of the future of artificial intelligence," according to the association.

Various industries are increasingly embracing AI applications that significantly streamline and automate processes, whether autonomously or with the assistance of humans.

Investing in AI improves audit quality and lowers fees, according to a study conducted by the University of Southern California's Marshall School of Business.

It does, however, acknowledge that technology will eventually displace human auditors, though the impact on labor will take several years to manifest.


More info:

www.dprg.co.in 


Friday, March 18, 2022

Robotics Research


Overview

The field of robotics has been undergoing a major change from manufacturing applications to entertainment, home, rehabilitation, search and rescue, and service applications. Although robots seem to possess fantastic skills in science fiction and movies, most people would be surprised to learn how much remains to be accomplished to provide today's robots with the ability to do relatively simple tasks. Autonomous robots are only able to complete very simple tasks within limited environmental conditions. Humans can be incorporated to teleoperate or supervise robots, but as the robot complexity increases so does the human's workload. Robotics requires research in many areas that include hybrid systems, embedded systems, sensory fusion, distributed artificial intelligence, computer vision, machine learning, human-machine interaction, localization, planning, navigation, etc. This large field provides ample research problems.

The Engineering School's Department of Electrical Engineering and Computer Science houses the Center for Intelligent Systems (CIS), which encompasses both the Cognitive Robotics Lab (CRL) and the Intelligent Robotics Lab (IRL). In addition to CIS, the department also includes six additional laboratories that conduct robotics research: the Computational Cognitive Neuroscience Laboratory (CCNL), the Embedded Computing Systems Laboratory (ECS), the Embedded and Hybrid Systems Laboratory (EHS), the Human-Machine Teaming Laboratory (HMT), the Modeling and Analysis of Complex Systems (MACS) group, and the Robotics and Autonomous Systems Laboratory (RAS). Each laboratory provides a specific robotics research focus. The broad research areas include: biologically inspired robotic control (CCNL), cognitive robotics (CRL), embedded systems (ECS, EHS), human-robotic interaction (HMT, IRL, RAS), humanoid robotics (CRL), planning (MACS), sensor networks (EHS), hybrid robotic systems (EHS, MACS), mobile robot navigation (IRL), multiple robot coordination and cooperation (HMT), real-time systems (EHS), and rehabilitation robotics (RAS).

Topics

Biologically inspired robot control
Decision-Theoretic planning and control
Humanoid robots
Human-robot interaction
Hybrid and Distributed Control
Knowledge sharing among robots
Mobile robot navigation
Mobile sensor networks
Modeling, simulation and diagnosis
Multiple robot coordination and cooperation
Personal and service robots
Range-free perception-based navigation
Rehabilitation robotics
Sensory EgoSphere
Stochastic hybrid systems for multiple robot teams
Vision, image and signal processing systems



Wednesday, March 16, 2022

How to visualise different ML models using PyCaret for optimization?



In machine learning, optimizing the results produced by models plays an important role in obtaining better outcomes. We normally get these results in tabular form, and optimizing models from tables alone makes the procedure complex and time-consuming. Visualizing the results well is very helpful for model optimization. Using the PyCaret library, we can both visualize the results and optimize them. In this article, we are going to discuss how we can visualize results from different modelling procedures. The major points to be discussed in the article are listed below.

Table of contents
What is PyCaret?
Visualizing a classification model
Visualizing a regression model
Visualizing a clustering model
Visualizing anomaly detection

What is PyCaret?

In one of our articles, we have discussed that PyCaret is an open-source library that helps us perform a variety of machine learning modelling tasks automatically. This library aims to simplify machine learning modelling through a low-code interface. Along with that low-code interface, this library provides modules that are efficient and save time.

To improve the explainability and interpretability of the process, this library provides various visualizations implemented through some of its modules. It combines visualizations from well-known packages such as Yellowbrick, AutoViz, and Plotly. In this article, using PyCaret's visualization modules, we are going to plot results from different models.

Let’s start with a classification model.

Visualizing a Classification model

Let’s start the procedure by importing data. Since the PyCaret library ships with multiple datasets, we can use them for practice. In this article, we are going to use a heart disease dataset that contains multiple medical-condition values and classifies whether a person has heart disease or not. Let’s import it directly from the library.

from pycaret.datasets import get_data
data = get_data('heart_disease')


Output:

Here in the above output, we can see some of the values from the dataset. Now let’s set up a model.

from pycaret.classification import *
model1 = setup(data = data, target = 'Disease')


Output:

In the setup of the model, we have provided the dataset and the name of the target variable. After preprocessing the dataset, the setup module provides information about the variables the dataset consists of.

Let’s fit the model.

random_forest = create_model('rf')


Output:

Here we can see information about the accuracy and AUC scores of the model. Now that the model is fitted, we come to the main purpose of the article.

Visualizing results

Another great thing about the PyCaret library is that we can use its plotting features to visualize the results that we otherwise only have in tabular form.

Plotting the AUC scores:

plot_model(random_forest, plot = 'auc')

Output:

Here in the plot, we have a detailed visualization of the AUC values from the model. Libraries such as Yellowbrick and AutoViz help PyCaret produce such visualizations.
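To make concrete what such an AUC plot summarizes, here is a small sketch that computes the ROC curve and its area directly with scikit-learn. The labels and scores below are made-up values for illustration, not the heart_disease results from the article.

```python
# Sketch of what an AUC plot summarizes: the ROC curve (FPR vs TPR at
# every score threshold) and the area under it. Values are made up.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0])                     # ground-truth labels
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.7, 0.3])  # predicted probabilities

fpr, tpr, thresholds = roc_curve(y_true, y_score)  # points on the plotted curve
auc = roc_auc_score(y_true, y_score)               # area under that curve
print(f"AUC = {auc:.4f}")                          # -> AUC = 0.9375
```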

Let’s see the confusion matrix in a visualized form.

plot_model(random_forest , plot = 'confusion_matrix')

Output:

Here we can see the confusion matrix. We can also convert the values in the confusion matrix to percentages.

plot_model(random_forest , plot = 'confusion_matrix', plot_kwargs = {'percent' : True})

Output:
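Under the hood, the percent view is just a normalization of the raw counts. The sketch below uses plain NumPy and one common convention, dividing each row by its true-class total; the exact convention PyCaret's plot uses may differ, and the counts here are hypothetical, not the article's actual matrix.

```python
# Converting a raw confusion matrix to percentages with NumPy.
# Hypothetical counts; rows are true classes, columns are predictions.
import numpy as np

cm = np.array([[50, 10],
               [ 5, 35]])

# Row-wise normalization: each cell as a fraction of its true class
cm_percent = cm / cm.sum(axis=1, keepdims=True)
print(np.round(cm_percent * 100, 1))
```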

We can also plot results based on the training data by just passing use_train_data as True. Let’s plot the decision boundary of the model.

plot_model(random_forest, plot = 'boundary', use_train_data = True)

Output:

Feature engineering plays a crucial role in data modelling. We can check the feature importance using the following line of code.

plot_model(random_forest, plot = 'feature_all', use_train_data = True)

Output: 
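For intuition about what a feature-importance plot shows for a random forest, here is a small scikit-learn sketch that reads the fitted estimator's feature_importances_ attribute, which is most likely the quantity such a plot is built from. The dataset below is synthetic and purely illustrative.

```python
# Impurity-based feature importances from a random forest (they sum to 1).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data: 6 features, only 2 of them informative
X, y = make_classification(n_samples=300, n_features=6,
                           n_informative=2, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

importances = rf.feature_importances_
for i in np.argsort(importances)[::-1]:          # most important first
    print(f"feature_{i}: {importances[i]:.3f}")
```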

Here we have discussed visualizing the results of a classification model. We can plot various results from a regression model in the same way.

Sunday, March 13, 2022

Google machine learning can help discover new antibodies, enzymes, foods



Alphabet (Google's parent company) subsidiary DeepMind has shown that Machine Learning (ML) can predict the shape of protein machinery with unprecedented accuracy, paving the way for researchers to discover new antibodies, enzymes and foods.

The shape of a protein provides very strong clues as to how the protein machinery can be used, but doesn't completely solve this question.

"So we wondered, can we predict what function a protein will perform?" Max Bileschi, a staff software engineer with Google Research's Brain Team, elaborated.


Google described in a Nature Biotechnology article how neural networks outperform state-of-the-art methods in reliably revealing the function of the protein universe's "dark matter."

DeepMind collaborated closely with internationally recognised experts at the EMBL's European Bioinformatics Institute (EMBL-EBI) to annotate 6.8 million additional protein regions in the 'Pfam v34.0 database' release, a global repository for protein families and their functions.

These annotations outnumber the database's expansion over the last decade, allowing the world's 2.5 million life-science researchers to discover new antibodies, enzymes, foods, and therapeutics.

For roughly one-third of all proteins found in all organisms, "our ML models helped annotate 6.8 million more protein regions in the database," said the researchers.

The company has also launched an interactive scientific article where "you can play with our ML models -- getting results in real time, all in your web browser, with no setup required."

According to researchers, combining deep models with existing methods significantly improves remote homology detection, suggesting that the deep models learn complementary information.

This approach extends the coverage of Pfam by more than 9.5 per cent, exceeding additions made over the last decade, and predicts function for 360 human reference proteome proteins with no previous Pfam annotation.

"The results suggest that deep learning models will be a core component of future protein annotation tools."

Friday, March 4, 2022

How Artificial Intelligence Offers Real-World Opportunities For Indian Students



In the 2021 sci-fi drama Finch, there is an interesting scene where the legendary Tom Hanks, the film’s eponymous protagonist, can be seen talking to a robot named Jeff in a desert. “You see, you can already tell me how many rivets are in the Golden Gate Bridge. And how many miles of cables were used and how high it is. But it is not until you actually stand on it and see the beauty, and listen to the suspension cables singing in the wind…That’s experience, human experience,” Finch tells Jeff.

While the movie is set in the future, what Finch said about Jeff’s technical prowess has been true of the ease of technology for a while now. As for the human experience bit of it, I’m sure Jeff doesn't mind. Not yet, at least.

