
Tuesday, November 30, 2021

Andhra commercial tax dept to use data analytics to pick tax evaders





tax evasion | Data analytics | Andhra Pradesh

The Commercial Taxes Department of Andhra Pradesh will use data analytics to pick out tax evaders in the state, a top official said.

The state Directorate of Revenue Intelligence has been tasked with the exercise of carrying out a sector-wise analysis to determine tax collection, plug the gaps and widen the tax base so as to meet the revenue targets.
Special Chief Secretary (Revenue-Commercial Taxes) Rajat Bhargava on Tuesday directed the DRI to work in tandem with the enforcement wing of the CT Department and use data analytics to identify tax evaders.

At a review meeting, the Special CS asked the authorities of the CT Department to strengthen the audit wing to check the compliance levels in the case of large taxpayers.

"We need to ramp up efforts to widen and deepen the state tax base by moving forward in a systematic manner. Focus should be on unearthing large-scale GST frauds," Rajat told the officials.

He also reviewed the state-related issues to be raised at the GST Council meeting, scheduled to be held on September 17 in Lucknow.

The state is yet to receive over Rs 2,500 crore in GST compensation this year.

Besides, GST rates on liquor, petrol and diesel, solar power equipment and solar power plants would also be discussed at the Council meeting.

Chief Commissioner of State Tax S Ravi Shankar Narayan, DRI Special Commissioner Ch Rajeswar Reddy and other senior officials attended the review meeting.

For related topics, please visit:

www.dprg.co.in

Monday, November 29, 2021

Using AI and old reports to understand new medical images



Getting a quick and accurate reading of an X-ray or some other medical image can be vital to a patient’s health and might even save a life. Obtaining such an assessment depends on the availability of a skilled radiologist and, consequently, a rapid response is not always possible. For that reason, says Ruizhi “Ray” Liao, a postdoc and a recent PhD graduate at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), “we want to train machines that are capable of reproducing what radiologists do every day.” Liao is first author of a new paper, written with other researchers at MIT and Boston-area hospitals, that is being presented this fall at MICCAI 2021, an international conference on medical image computing.



Although the idea of utilizing computers to interpret images is not new, the MIT-led group is drawing on an underused resource — the vast body of radiology reports that accompany medical images, written by radiologists in routine clinical practice — to improve the interpretive abilities of machine learning algorithms. The team is also utilizing a concept from information theory called mutual information — a statistical measure of the interdependence of two different variables — in order to boost the effectiveness of their approach.



Here’s how it works: First, a neural network is trained to determine the extent of a disease, such as pulmonary edema, by being presented with numerous X-ray images of patients’ lungs, along with a doctor’s rating of the severity of each case. That information is encapsulated within a collection of numbers. A separate neural network does the same for text, representing its information in a different collection of numbers. A third neural network then integrates the information between images and text in a coordinated way that maximizes the mutual information between the two datasets. “When the mutual information between images and text is high, that means that images are highly predictive of the text and the text is highly predictive of the images,” explains MIT Professor Polina Golland, a principal investigator at CSAIL.
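The article does not spell out the exact training objective, but a standard way to maximize mutual information between two embedding spaces is a contrastive (InfoNCE) bound. The sketch below is an illustrative PyTorch version under that assumption, with `img_emb` and `txt_emb` standing in for the outputs of the image and text networks; it is not the authors' code.

```python
import torch
import torch.nn.functional as F

def infonce_mi_loss(img_emb, txt_emb, temperature=0.1):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.
    Minimizing it pulls matched pairs together, which maximizes a lower bound
    on the mutual information between the two embedding spaces."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature        # (batch, batch) similarities
    targets = torch.arange(img_emb.size(0))             # matched pairs sit on the diagonal
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Toy usage: random 16-dimensional embeddings for a batch of 8 X-ray/report pairs
loss = infonce_mi_loss(torch.randn(8, 16), torch.randn(8, 16))
```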

Read more at:

www.dprg.co.in



Saturday, November 27, 2021

Deep learning helps predict new drug combinations to fight Covid-19


The existential threat of Covid-19 has highlighted an acute need to develop working therapeutics against emerging health concerns. One of the luxuries deep learning has afforded us is the ability to modify the landscape as it unfolds — so long as we can keep up with the viral threat, and access the right data. 

As with all new medical maladies, oftentimes the data need time to catch up, and the virus takes no time to slow down, posing a difficult challenge as it can quickly mutate and become resistant to existing drugs. This led scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Jameel Clinic for Machine Learning in Health to ask: How can we identify the right synergistic drug combinations for the rapidly spreading SARS-CoV-2? 

Typically, data scientists use deep learning to pick out drug combinations with large existing datasets for things like cancer and cardiovascular disease, but, understandably, they can’t be used for new illnesses with limited data.

Without the necessary facts and figures, the team needed a new approach: a neural network that wears two hats. Since drug synergy often occurs through inhibition of biological targets (like proteins or nucleic acids), the model jointly learns drug-target interaction and drug-drug synergy to mine new combinations. The drug-target predictor models the interaction between a drug and a set of known biological targets that are related to the chosen disease. The target-disease association predictor learns to understand a drug's antiviral activity, which means determining the virus yield in infected tissue cultures. Together, they can predict the synergy of two drugs. 
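The sketch below is a schematic of this two-headed idea, not the published model: the drug and target featurizations (for example, molecular fingerprints) and the layer sizes are assumptions chosen for illustration.

```python
import torch
import torch.nn as nn

class JointSynergyModel(nn.Module):
    """Illustrative two-headed network: a shared drug encoder feeds both a
    drug-target interaction head and an order-invariant drug-drug synergy head."""
    def __init__(self, drug_dim, target_dim, hidden=128):
        super().__init__()
        self.drug_enc = nn.Sequential(nn.Linear(drug_dim, hidden), nn.ReLU())
        self.target_enc = nn.Sequential(nn.Linear(target_dim, hidden), nn.ReLU())
        self.interaction_head = nn.Linear(hidden, 1)
        self.synergy_head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                          nn.Linear(hidden, 1))

    def target_interaction(self, drug, target):
        fused = self.drug_enc(drug) * self.target_enc(target)   # elementwise fusion
        return torch.sigmoid(self.interaction_head(fused))      # P(drug hits target)

    def synergy(self, drug_a, drug_b):
        pair = self.drug_enc(drug_a) + self.drug_enc(drug_b)    # symmetric in the two drugs
        return self.synergy_head(pair)                          # predicted combination effect

model = JointSynergyModel(drug_dim=64, target_dim=32)
score = model.synergy(torch.randn(4, 64), torch.randn(4, 64))   # toy drug feature vectors
```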

Two new drug combinations were found using this approach: remdesivir (currently approved by the FDA to treat Covid-19) and reserpine, as well as remdesivir and IQ-1S, which, in biological assays, proved powerful against the virus. The study has been published in the Proceedings of the National Academy of Sciences. 


Read more at:

www.dprg.co.in


Tuesday, November 23, 2021

3 Questions: Kalyan Veeramachaneni on hurdles preventing fully automated machine learning

 Researchers hope more user-friendly machine-learning systems will enable nonexperts to analyze big data — but can such systems ever be completely autonomous?



The proliferation of big data across domains, from banking to health care to environmental monitoring, has spurred increasing demand for machine learning tools that help organizations make decisions based on the data they gather.

That growing industry demand has driven researchers to explore the possibilities of automated machine learning (AutoML), which seeks to automate the development of machine learning solutions in order to make them accessible for nonexperts, improve their efficiency, and accelerate machine learning research. For example, an AutoML system might enable doctors to use their expertise interpreting electroencephalography (EEG) results to build a model that can predict which patients are at higher risk for epilepsy — without requiring the doctors to have a background in data science.
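As a toy illustration of the kind of automation AutoML targets, the snippet below runs an automated model-and-hyperparameter search with scikit-learn over synthetic tabular data. Real AutoML systems automate far more (feature engineering, model search, deployment), and the EEG use case above would need domain-specific preprocessing; this is only a sketch.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for a domain expert's tabular dataset
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 200], "max_depth": [3, None]},
    cv=5,
)
search.fit(X, y)                                   # try, score, and pick automatically
print(search.best_params_, search.best_score_)
```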

Yet, despite more than a decade of work, researchers have been unable to fully automate all steps in the machine learning development process. Even the most efficient commercial AutoML systems still require a prolonged back-and-forth between a domain expert, like a marketing manager or mechanical engineer, and a data scientist, making the process inefficient.

Kalyan Veeramachaneni, a principal research scientist in the MIT Laboratory for Information and Decision Systems who has been studying AutoML since 2010, has co-authored a paper in the journal ACM Computing Surveys that details a seven-tiered schematic to evaluate AutoML tools based on their level of autonomy.

A system at level zero has no automation and requires a data scientist to start from scratch and build models by hand, while a tool at level six is completely automated and can be easily and effectively used by a nonexpert. Most commercial systems fall somewhere in the middle.

Veeramachaneni spoke with MIT News about the current state of AutoML, the hurdles that prevent truly automatic machine learning systems, and the road ahead for AutoML researchers.


Read more at: 

www.dprg.co.in


Saturday, November 20, 2021

Deep learning helps predict traffic crashes before they happen

 A deep model was trained on historical crash data, road maps, satellite imagery, and GPS to enable high-resolution crash maps that could lead to safer roads.

Today's world is one big maze, connected by layers of concrete and asphalt that allow us to travel by car. While many road-related advancements have arrived — GPS allows us to fire fewer neurons thanks to mapping apps, cameras alert us to potentially costly scrapes and scratches, and electric autonomous cars have lower fuel costs — our safety measures have not yet caught up. To safely get from point A to point B, we still rely on a steady diet of traffic signals, trust, and the steel that surrounds us.

Scientists from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Qatar Center for Artificial Intelligence collaborated to tackle that uncertainty, training a deep learning model on historical crash data, road maps, satellite imagery, and GPS traces to produce high-resolution crash risk maps.
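As a rough sketch of that setup, gridded inputs (historical crash density, road maps, satellite imagery, GPS traffic density) can be stacked as channels and mapped to a per-cell risk score with a small convolutional network. The architecture below is an assumption for illustration, not the published model.

```python
import torch
import torch.nn as nn

class RiskMapNet(nn.Module):
    """Maps stacked gridded inputs to a per-cell crash-risk score."""
    def __init__(self, in_channels=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=1),          # one risk score per grid cell
        )

    def forward(self, x):                             # x: (batch, channels, height, width)
        return torch.sigmoid(self.net(x))             # risk in [0, 1]

risk_map = RiskMapNet()(torch.rand(1, 4, 64, 64))     # toy 64x64 map tile
```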

Read more at:
www.dprg.co.in


Tuesday, November 16, 2021

These neural networks know what they’re doing

 A certain type of artificial intelligence agent can learn the cause-and-effect basis of a navigation task during training.


Neural networks can learn to solve a wide range of problems, from recognizing cats in photos to steering a self-driving car. However, it is unclear whether these powerful pattern-recognition algorithms truly understand the tasks they are performing.


For example, instead of learning to detect lanes and focus on the road's horizon, a neural network tasked with keeping a self-driving car in its lane might learn to do so by watching the bushes at the side of the road.

MIT researchers have demonstrated that a specific type of neural network can learn the true cause-and-effect structure of the navigation task it is being trained to perform. Because these networks can understand the task directly from visual data, they should outperform other neural networks when navigating in a complex environment, such as one with dense trees or rapidly changing weather conditions.
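One simple way to probe whether a driving network relies on the road's structure rather than roadside cues is input-gradient saliency. The sketch below is a generic diagnostic, not the method used in the paper; `toy_model` is a stand-in for any trained image-to-steering network.

```python
import torch
import torch.nn as nn

def saliency_map(model, image):
    """Return per-pixel importance of `image` (shape (1, 3, H, W)) for the model's output."""
    image = image.clone().requires_grad_(True)
    model(image).sum().backward()                     # gradient of steering output w.r.t. pixels
    return image.grad.abs().max(dim=1).values         # (1, H, W) pixel importance

# Toy stand-in model and image, just to show the call pattern
toy_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1))
print(saliency_map(toy_model, torch.rand(1, 3, 64, 64)).shape)
```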

Read more at:


www.dprg.co.in


Saturday, November 13, 2021

Accelerating the discovery of new materials for 3D printing

A new machine-learning system costs less, generates less waste, and can be more innovative than manual discovery methods.

The growing popularity of 3D printing for manufacturing all sorts of items, from customized medical devices to affordable homes, has created more demand for new 3D printing materials designed for very specific uses.

To cut down on the time it takes to discover these new materials, researchers at MIT have developed a data-driven process that uses machine learning to optimize new 3D printing materials with multiple characteristics, like toughness and compression strength.

By streamlining materials development, the system lowers costs and lessens the environmental impact by reducing the amount of chemical waste. The machine learning algorithm could also spur innovation by suggesting unique chemical formulations that human intuition might miss. 

“Materials development is still very much a manual process. A chemist goes into a lab, mixes ingredients by hand, makes samples, tests them, and comes to a final formulation. But rather than having a chemist who can only do a couple of iterations over a span of days, our system can do hundreds of iterations over the same time span,” says Mike Foshey, a mechanical engineer and project manager in the Computational Design and Fabrication Group (CDFG) of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and co-lead author of the paper.

Additional authors include co-lead author Timothy Erps, a technical associate in CDFG; Mina Konaković Luković, a CSAIL postdoc; Wan Shou, a former MIT postdoc who is now an assistant professor at the University of Arkansas; senior author Wojciech Matusik, professor of electrical engineering and computer science at MIT; and Hanns Hagen Geotzke, Herve Dietsch, and Klaus Stoll of BASF. The research was published today in Science Advances.

Optimizing discovery

In the system the researchers developed, an optimization algorithm performs much of the trial-and-error discovery process.

A material developer selects a few ingredients, inputs details on their chemical compositions into the algorithm, and defines the mechanical properties the new material should have. Then the algorithm increases and decreases the amounts of those components (like turning knobs on an amplifier) and checks how each formula affects the material’s properties, before arriving at the ideal combination.

Then the developer mixes, processes, and tests that sample to find out how the material actually performs. The developer reports the results to the algorithm, which automatically learns from the experiment and uses the new information to decide on another formulation to test.
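A minimal sketch of this closed loop is shown below. It swaps the model-based optimizer for a simple local search and replaces the lab measurement with a mock scoring function, so it only illustrates the propose / measure / update cycle, not the actual system.

```python
import random

def propose(bounds, best_mix, step=0.05):
    """'Ask' step: perturb the best-known formulation (or sample one at random)."""
    if best_mix is None:
        return [random.uniform(lo, hi) for lo, hi in bounds]
    return [min(hi, max(lo, x + random.uniform(-step, step)))
            for x, (lo, hi) in zip(best_mix, bounds)]

def optimize(bounds, measure, iterations=200):
    best_mix, best_score = None, float("-inf")
    for _ in range(iterations):
        mix = propose(bounds, best_mix)               # algorithm suggests a formulation
        score = measure(mix)                          # developer mixes, prints, and tests it
        if score > best_score:                        # 'tell' step: learn from the result
            best_mix, best_score = mix, score
    return best_mix, best_score

# Three ingredients, each a fraction of the mix; the mock score stands in for
# measured properties such as toughness and compression strength.
bounds = [(0.0, 1.0)] * 3
mock_measure = lambda mix: -abs(sum(mix) - 1.0) - abs(mix[0] - 0.4)
print(optimize(bounds, mock_measure))
```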

“We think, for a number of applications, this would outperform the conventional method because you can rely more heavily on the optimization algorithm to find the optimal solution. You wouldn’t need an expert chemist on hand to preselect the material formulations,” Foshey says.

The researchers have created a free, open-source materials optimization platform called AutoOED that incorporates the same optimization algorithm. AutoOED is a full software package that also allows researchers to conduct their own optimization.

Read more at:

www.dprg.co.in


Tuesday, November 9, 2021

Scaler Academy launches new course in data science and machine learning


Ed-tech startup Scaler Academy today said that it has launched a new program for engineers in data science and machine learning (ML).

The company said that the program had been designed based on a survey conducted by the company with around 100 data scientists working in leading tech and product firms worldwide.

The course will have a foundation of data structures and algorithms, followed by mathematics, data mining, statistical analysis, data science, machine learning, deep learning and big data.


Saturday, November 6, 2021

Attention-based deep neural network increases detection capability in sonar systems

 

The deep-learning technique detects multiple ship targets better than conventional networks.


In underwater acoustics, deep learning is gaining traction in improving sonar systems to detect ships and submarines in distress or in restricted waters. However, noise interference from the complex marine environment becomes a challenge when attempting to detect targeted ship-radiated sounds.

In the Journal of the Acoustical Society of America, published by the Acoustical Society of America through AIP Publishing, researchers in China and the United States explore an attention-based deep neural network (ABNN) to tackle this problem.

"We found the ABNN was highly accurate in a target recognition, exceeding a conventional deep neural network, particularly when using limited single-target data to detect multiple targets," co-author Qunyan Ren said.

Deep learning is a machine-learning method that uses artificial neural networks inspired by the human brain to recognize patterns. Each layer of artificial neurons, or nodes, learns a distinct set of features based on the information contained in the previous layer.

ABNN uses an attention module to mimic elements in the cognitive process that enable us to focus on the most important parts of an image, language, or other pattern and tune out the rest. This is accomplished by adding more weight to certain nodes to enhance specific pattern elements in the machine-learning process.
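A common way to implement this kind of re-weighting is a channel-attention (squeeze-and-excitation style) module, sketched below in PyTorch. The exact module in the paper may differ, so treat this as a generic illustration of attention over sonar feature channels.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Re-weights feature channels so the network emphasizes the most informative ones."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                             # x: (batch, channels, time)
        weights = self.fc(x.mean(dim=-1))             # pool over time, score each channel
        return x * weights.unsqueeze(-1)              # boost useful channels, damp the rest

attended = ChannelAttention(32)(torch.rand(4, 32, 100))   # toy sonar feature map
```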

Incorporating an ABNN system in sonar equipment for targeted ship detection, the researchers tested it on two ships in a shallow, 135-square-mile area of the South China Sea. They compared their results with a typical deep neural network (DNN). Radar and other equipment were used to identify more than 17 interfering vessels in the experimental area.


For more info:

www.dprg.co.in


Tuesday, November 2, 2021

Artificial intelligence may be set to reveal climate-change tipping points

 

Researchers are developing artificial intelligence that could assess climate change tipping points. The deep learning algorithm could act as an early warning system against runaway climate change.

Chris Bauch, a professor of applied mathematics at the University of Waterloo, is co-author of a recent research paper reporting results on the new deep-learning algorithm. The research looks at thresholds beyond which rapid or irreversible change happens in a system, Bauch said.

"We found that the new algorithm was able to not only predict the tipping points more accurately than existing approaches but also provide information about what type of state lies beyond the tipping point," Bauch said. "Many of these tipping points are undesirable, and we'd like to prevent them if we can."Some tipping points that are often associated with run-away climate change include melting Arctic permafrost, which could release mass amounts of methane and spur further rapid heating; breakdown of oceanic current systems, which could lead to almost immediate changes in weather patterns; or ice sheet disintegration, which could lead to rapid sea-level change.

The innovative approach with this AI, according to the researchers, is that it was programmed to learn not just about one type of tipping point but the characteristics of tipping points generally.
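In that spirit, a generic early-warning classifier can be sketched as a small 1D convolutional network that reads a window of a time series and predicts whether a tipping point is approaching and, if so, what kind of transition lies beyond it. The architecture and class labels below are assumptions for illustration, not the authors' model.

```python
import torch
import torch.nn as nn

class TippingPointClassifier(nn.Module):
    """Classifies a time-series window into 'no tipping point' or an assumed transition type."""
    def __init__(self, n_classes=4):                  # assumed labels: none plus three transition types
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(16, n_classes),
        )

    def forward(self, series):                        # series: (batch, 1, time)
        return self.net(series)                       # logits over transition classes

logits = TippingPointClassifier()(torch.rand(8, 1, 500))   # toy batch of climate time series
```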


For more details: