The Northeastern Interactive Clustering Engine (NICE) is an open source machine learning visualization tool that allows researchers to interactively analyze data sets. It uses two machine learning algorithms, K-Means clustering and spectral clustering, to provide insights into relationships within large, multi-dimensional data. The software is engineered to accelerate these algorithms using both CPUs and GPUs for improved performance. Future work will focus on additional algorithm implementations and a recommendation system to guide users.
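As a rough illustration of the kind of clustering NICE exposes, here is a minimal K-Means run with scikit-learn; this is a generic sketch on synthetic data, not NICE's own API.

```python
# Minimal K-Means sketch (scikit-learn); illustrative only, not the NICE API.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two synthetic 2-D blobs standing in for a real dataset.
X = np.vstack([rng.normal(0, 1, (100, 2)),
               rng.normal(5, 1, (100, 2))])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.cluster_centers_)  # approximate blob centers
print(km.labels_[:5])       # cluster assignment per point
```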
The document discusses GPU computing for machine learning. It notes that machine learning algorithms are computationally expensive and their requirements increase with data size. GPUs provide significant performance gains over CPUs for parallel problems like machine learning. Many machine learning algorithms have been implemented on GPUs, achieving speedups of 1-2 orders of magnitude. However, most GPU implementations are closed-source. Open-source implementations provide advantages like reproducibility and fair algorithm comparisons.
The 1st workshop on engineering processes and practices for quantum software ... - Mahdi_Fahmideh
This document summarizes a presentation on developing quantum software engineering practices for quantum algorithm development for multiphysics simulations. It discusses Quanscient's work on developing quantum-native simulation algorithms such as the Quantum Lattice-Boltzmann Method. It notes that quantum software engineering has some peculiarities due to the non-deterministic nature of quantum computations and the immaturity of quantum hardware. It also describes an ongoing case study developing a flexible Quantum Lattice-Boltzmann module using an API and discusses some challenges of applying agile practices to quantum software development.
Deterministic Machine Learning with MLflow and mlf-core - Databricks
Machine learning suffers from a reproducibility crisis. Deterministic machine learning is incredibly important for academia to verify papers, but also for developers to debug, audit, and regression-test models.
Due to the various sources of non-determinism in ML, especially when GPUs are in play, I conducted several experiments and identified the causes and the corresponding solutions (where available).
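As a hedged sketch of the kind of fixes the talk covers (not its exact checklist), a typical PyTorch determinism recipe looks like this:

```python
# One common recipe for more deterministic PyTorch runs; a sketch, not
# the talk's exact solution list.
import os, random
import numpy as np
import torch

def seed_everything(seed: int = 42) -> None:
    random.seed(seed)                         # Python RNG
    np.random.seed(seed)                      # NumPy RNG
    torch.manual_seed(seed)                   # CPU and all CUDA RNGs
    torch.use_deterministic_algorithms(True)  # raise on non-deterministic ops
    torch.backends.cudnn.benchmark = False    # disable cuDNN autotuner
    # Required by some cuBLAS ops in deterministic mode (CUDA >= 10.2):
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

seed_everything()
```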
Willump: Optimizing Feature Computation in ML Inference - Databricks
Systems for performing ML inference are increasingly important, but are far slower than they could be because they use techniques designed for conventional data serving workloads, neglecting the statistical nature of ML inference. As an alternative, this talk presents Willump, an optimizer for ML inference.
Technologies comparison: Genuino 101 vs uTensor - AndreaNapoletani
This talk compares two technologies used in machine learning applications, Genuino 101 and uTensor; in particular, it compares the Intel Curie Module (Pattern Matching Engine) and TensorFlow.
If we could only predict the future of the software industry, we could make better investments and decisions. We could waste fewer resources on technology and processes we know will not last, or at least be conscious in our decisions to choose solutions with a limited lifetime. It turns out that for data engineering, we can predict the future, because it has already happened. Not in our workplace, but at a few leading companies that are blazing ahead. It has also already happened in the neighbouring field of software engineering, which is two decades ahead of data engineering in process maturity. In this presentation, we will glimpse into the future of data engineering. Data engineering has gone from legacy data warehouses with stored procedures, to big data with Hadoop and data lakes, on to a new form of modern data warehouses and low-code tools, aka "the modern data stack". Where does it go from here? We will look at the points where data leaders differ from the crowd and combine them with observations on how software engineering has evolved, to see that it all points towards a new, more industrialised form of data engineering: "data factory engineering".
This document provides an introduction to big data and data science concepts. It discusses how data is now plentiful and inexpensive to store compared to the past. It outlines some of the challenges of big data, such as ingesting, organizing, and interpreting large datasets, as well as overfitting. Machine learning models discussed include neural networks, convolutional neural networks, and Word2Vec for natural language processing. The document also gives an overview of key statistical concepts in evaluating models, such as training, validation, and testing, and compares different performance metrics.
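As a small sketch of that train/validate/test discipline (toy data, invented split ratios):

```python
# Train/validation/test split sketch with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
# 60% train, 20% validation, 20% test.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("validation accuracy:", model.score(X_val, y_val))  # used for tuning
print("test accuracy:", model.score(X_test, y_test))      # reported once, at the end
```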
Reading: "Pi in the sky: Calculating a record-breaking 31.4 trillion digits o...Kento Aoyama
(Journal Club at AIS Lab. on April 22, 2019)
Reading: “Pi in the sky: Calculating a record-breaking 31.4 trillion digits of Archimedes’ constant on Google Cloud”
BKK16-203 Irq prediction or how to better estimate idle time - Linaro
Design review. The current approach to predicting the idle time duration is based on statistics over previous idle durations. The presentation will show the weaknesses of this approach and how, by tracking IRQ behavior, we can predict the next event and better estimate the idle duration.
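A toy model of the statistics-based approach being critiqued (not the kernel's actual cpuidle governor code) makes the weakness easy to see:

```python
# Statistics-based idle-time prediction: estimate the next idle duration
# from recent history. Toy sketch; real cpuidle governors are more involved.
from collections import deque

class IdlePredictor:
    def __init__(self, window: int = 8):
        self.history = deque(maxlen=window)

    def observe(self, idle_us: float) -> None:
        self.history.append(idle_us)

    def predict(self) -> float:
        # Moving average of past idle times; it lags badly when the IRQ
        # pattern shifts, which is the weakness the talk highlights.
        return sum(self.history) / len(self.history) if self.history else 0.0

p = IdlePredictor()
for d in [500, 520, 480, 5000]:  # a sudden long idle breaks the estimate
    p.observe(d)
print(p.predict())  # far from both the recent 5000 and the earlier ~500
```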
Slides for a study session given by Dr. Enrico Rinaldi at Arithmer Inc. They summarize recent methods for real-time instance segmentation ("YOLACT"), which is especially useful in robotics.
Arithmer Inc. is a mathematics company that began at the University of Tokyo Graduate School of Mathematical Sciences. We apply modern mathematics to introduce advanced AI systems into solutions across many fields; our research in modern mathematics and AI provides solutions to tough, complex issues. We believe it is our job to realize the potential of AI by improving work efficiency and producing results that are useful to society.
Reinforcement Learning (RL) approaches deal with finding an optimal reward-based policy to act in an environment (talk in English).
However, what has led to their widespread use is their combination with deep neural networks (DNNs), i.e., deep reinforcement learning (Deep RL). Recent successes, not only in learning to play games but in surpassing humans at them, and academia-industry research collaborations on manipulation of objects, locomotion skills, smart grids, etc., have demonstrated their worth on a wide variety of challenging tasks.
With applications spanning games, robotics, dialogue, healthcare, marketing, energy, and many more domains, Deep RL might just be the power that drives the next generation of Artificial Intelligence (AI) agents!
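For concreteness, the core of reward-driven policy learning is the tabular Q-learning update below; the chain environment is invented for illustration, and Deep RL replaces the table with a neural network:

```python
# Tabular Q-learning on a toy 1-D chain; reward only at the right end.
import random

n_states, n_actions = 5, 2  # actions: 0 = left, 1 = right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(s, a):
    s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

for _ in range(2000):
    s = 0
    while s != n_states - 1:
        a = (random.randrange(n_actions) if random.random() < eps
             else max(range(n_actions), key=lambda x: Q[s][x]))
        s2, r = step(s, a)
        # Q-learning update: move Q toward reward plus discounted best future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print([max(range(n_actions), key=lambda a: Q[s][a]) for s in range(n_states - 1)])
# Greedy policy learned from rewards alone: move right in every non-terminal state.
```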
Talk @ APT Group, University of Manchester, 06 August 2014
Abstract:
Nowadays HPC systems, such as those in the Top500, are equipped with a range of different processors, from multi-core CPUs to GPUs. Programming them can be a tough job, especially if we want to squeeze every last FLOP of performance out of them.
As a PhD student, I am currently on a brief research visit to the APT group, working on topics related to the programmability and efficient use of GPUs and many-core coprocessors. In particular, I am implementing a large database operation using OpenCL on these state-of-the-art systems. In this talk I will summarize my work in Manchester and discuss future work on this topic.
Artificial Intelligence in practice - Gerbert Kaandorp - Codemotion Amsterdam... - Codemotion
In this talk Gerbert will give an overview of Artificial Intelligence, outline the current state of the art in research, and explain what it takes to actually do an AI project. Using practical cases and tools, he will give you insight into the phases of an AI project and explain some of the problems you might encounter along the way and how you might be able to solve them.
The document provides a general introduction to artificial intelligence (AI), machine learning (ML), deep learning (DL), and data science (DS). It defines each term and describes their relationships. Key points include:
- AI is the ability of computers to mimic human cognition and intelligence.
- ML is an approach to achieve AI by having computers learn from data without being explicitly programmed (illustrated in the sketch after this list).
- DL uses neural networks for ML, especially with unstructured data like images and text.
- DS involves extracting insights from data through scientific methods. It is a multidisciplinary field that uses techniques from ML, DL, and statistics.
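To make the ML definition concrete, here is learning a rule from examples rather than programming it, in a minimal scikit-learn sketch:

```python
# "Learning from data without being explicitly programmed": a decision
# tree recovers the AND rule from labeled examples instead of hand-coded ifs.
from sklearn.tree import DecisionTreeClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]  # inputs
y = [0, 0, 0, 1]                      # logical AND of the two inputs

clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([[1, 1], [0, 1]]))  # -> [1 0]: rule learned from the data
```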
This document reviews GPU computation in bioinspired algorithms. It begins with an introduction to GPUs and their suitability for parallel computation. It then discusses GPU programming models and memory models. The bulk of the document reviews how different bioinspired methods like genetic algorithms, neural networks, and others have been implemented on GPUs. It finds that most use the GPU to accelerate fitness evaluation in parallel. The document concludes that GPU approaches can provide speedups of thousands of times over sequential CPU implementations.
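The dominant pattern the review reports, evaluating the whole population's fitness in one data-parallel pass, can be sketched as follows; NumPy stands in for the GPU here (with CuPy, a largely NumPy-compatible CUDA library, the same idea runs on the GPU):

```python
# Data-parallel fitness evaluation for a genetic algorithm population.
import numpy as np

rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(1024, 10))  # 1024 candidates, 10 genes each

def fitness(pop):
    # Sphere benchmark function, evaluated for all candidates at once.
    return np.sum(pop ** 2, axis=1)

scores = fitness(pop)          # one vectorized (GPU-friendly) evaluation
best = pop[np.argmin(scores)]  # minimization: lower is fitter
print(scores.min(), best[:3])
```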
Kaggle is one of the largest online communities for data scientists, known especially for its competitions, in which participants aim to solve data science challenges. Kaggle has a long history of competitions from different areas such as medicine, finance, scientific research, and sports, focusing on different types of data and prediction problems such as tabular data, time series, NLP, and computer vision.
As the complexity of choosing optimised, task-specific pipeline steps and ML models is often beyond non-experts, the rapid growth of machine learning applications has created a demand for off-the-shelf machine learning methods that can be used easily and without expert knowledge. We call the resulting research area, which targets progressive automation of machine learning, AutoML (a small search sketch follows the list below).
Although it focuses on end users without expert knowledge, AutoML also offers new tools to machine learning experts, for example to:
1. Perform architecture search over deep representations
2. Analyse the importance of hyperparameters.
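The sketch below shows a single slice of this automation, randomized hyperparameter search with scikit-learn; full AutoML systems also automate model selection and preprocessing:

```python
# Randomized hyperparameter search; one ingredient of AutoML, not a full system.
from scipy.stats import randint
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_iris(return_X_y=True)
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={"n_estimators": randint(10, 200),
                         "max_depth": randint(2, 10)},
    n_iter=20, cv=3, random_state=0,
).fit(X, y)
print(search.best_params_, search.best_score_)
```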
The Molecular Programming Project (MPP) is a collaboration between Caltech and University of Washington aimed at developing the theory and practice of programming molecular systems. The goals of the MPP are to: 1) create programming languages and compilers for molecular programming; 2) develop a theoretical framework for analyzing and designing molecular programs; 3) experimentally validate their compilers and theory with larger molecular programs than currently possible; 4) apply their technologies to real-world applications; and 5) train a new generation of molecular programmers.
Anurag Awasthi - Machine Learning applications for CloudStack - ShapeBlue
While machine learning and data mining have had a profound impact on how we model applications and use data for better product consumption, there is scope for extending prediction algorithms to lower levels as well. Some useful applications of machine learning in ACS could be exploring better resource allocation that is aware of usage statistics, predicting faults, load balancing, etc. In this talk we will: take a broad overview of what machine learning/data mining is and how it is being used in today's tech ecosystem; explore ways in which we can make ACS more efficient; and discuss some recent advancements from the research community in how ML can benefit datacenters.
Compressing of Magnetic Resonance Images with Cuda - ijtsrd
One of the most important areas that uses image processing is the health sector. To detect some diseases, a certain part of the patient's body must be visualized using medical imaging devices (MR, tomography, ultrasound, X-ray, echocardiography), a field handled by the radiology department. Because time is critical in healthcare, GPU technologies should be used in hospitals. Medical MRI images show that the unused (non-ROI) areas occupy a large portion of each image, and removing this unnecessary area can reduce the image size significantly. In the method developed with CUDA, the ROI (region of interest) within medical MR images is determined by applying a 3x3 Kirsch filter matrix on the CUDA cores, and the non-ROI region is extracted from the image with CUDA. The image is then compressed with a newly developed compression method. As a result, the parallel CUDA application solves the problem 34 times faster than the sequential application for each image, while the compressed image takes up 90% less space than the original image and 40% less space than the compressed size of the original image. Mahmut Ünver, Atilla Ergüzen, "Compressing of Magnetic Resonance Images with Cuda", International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume 3, Issue 1, December 2018. URL: http://www.ijtsrd.com/papers/ijtsrd20209.pdf
http://www.ijtsrd.com/computer-science/parallel-computing/20209/compressing-of-magnetic-resonance-images-with-cuda/mahmut-ünver
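A hedged sketch of the ROI-detection step: one 3x3 Kirsch directional kernel applied with SciPy. The paper runs the Kirsch filtering on CUDA cores and adds its own compression stage; this CPU-only fragment is illustrative:

```python
# Apply the north-direction 3x3 Kirsch kernel to flag edge (ROI-candidate) pixels.
import numpy as np
from scipy.ndimage import convolve

kirsch_north = np.array([[ 5,  5,  5],
                         [-3,  0, -3],
                         [-3, -3, -3]])

image = np.zeros((8, 8))
image[:, 4:] = 255.0                               # toy "MR slice" with one edge
edges = convolve(image, kirsch_north, mode="nearest")
roi_mask = np.abs(edges) > 0                       # crude ROI-candidate mask
print(int(roi_mask.sum()), "edge pixels flagged")
```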
Similar to Scientific Machine Learning using SciML.pdf (20)
Harnessing WebAssembly for Real-time Stateless Streaming Pipelines - Christina Lin
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
A SYSTEMATIC RISK ASSESSMENT APPROACH FOR SECURING THE SMART IRRIGATION SYSTEMS - IJNSA Journal
The smart irrigation system represents an innovative approach to optimize water usage in agricultural and landscaping practices. The integration of cutting-edge technologies, including sensors, actuators, and data analysis, empowers this system to provide accurate monitoring and control of irrigation processes by leveraging real-time environmental conditions. The main objective of a smart irrigation system is to optimize water efficiency, minimize expenses, and foster the adoption of sustainable water management methods. This paper conducts a systematic risk assessment by exploring the key components/assets and their functionalities in the smart irrigation system. The crucial role of sensors in gathering data on soil moisture, weather patterns, and plant well-being is emphasized in this system. These sensors enable intelligent decision-making in irrigation scheduling and water distribution, leading to enhanced water efficiency and sustainable water management practices. Actuators enable automated control of irrigation devices, ensuring precise and targeted water delivery to plants. Additionally, the paper addresses the potential threat and vulnerabilities associated with smart irrigation systems. It discusses limitations of the system, such as power constraints and computational capabilities, and calculates the potential security risks. The paper suggests possible risk treatment methods for effective secure system operation. In conclusion, the paper emphasizes the significant benefits of implementing smart irrigation systems, including improved water conservation, increased crop yield, and reduced environmental impact. Additionally, based on the security analysis conducted, the paper recommends the implementation of countermeasures and security approaches to address vulnerabilities and ensure the integrity and reliability of the system. By incorporating these measures, smart irrigation technology can revolutionize water management practices in agriculture, promoting sustainability, resource efficiency, and safeguarding against potential security threats.
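The quantitative core of such an assessment is simple risk arithmetic (risk = likelihood x impact); the asset names and scores below are invented for illustration, not taken from the paper:

```python
# Toy risk-matrix computation for smart-irrigation assets.
assets = {
    "soil moisture sensor": {"likelihood": 4, "impact": 3},
    "irrigation actuator":  {"likelihood": 2, "impact": 5},
    "gateway / data API":   {"likelihood": 3, "impact": 4},
}

for name, a in sorted(assets.items(),
                      key=lambda kv: kv[1]["likelihood"] * kv[1]["impact"],
                      reverse=True):
    risk = a["likelihood"] * a["impact"]  # 1-25 scale
    level = "high" if risk >= 12 else "medium" if risk >= 6 else "low"
    print(f"{name}: risk={risk} ({level})")
```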
ACEP Magazine 4th edition launched on 05.06.2024 - Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on lifetime achievement awards given by ACEP, and a technical article on concrete maintenance, repairs, and strengthening. The document highlights the activities of ACEP and provides a technical educational article for members.
Introduction: e-waste - definition - sources of e-waste - hazardous substances in e-waste - effects of e-waste on environment and human health - need for e-waste management - e-waste handling rules - waste minimization techniques for managing e-waste - recycling of e-waste - disposal and treatment methods of e-waste - mechanism of extraction of precious metals from leaching solution - global scenario of e-waste - e-waste in India - case studies.
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECT - jpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been referred to as the "New Great Game." This research centres on the power struggle, considering geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil politics, and conventional and nontraditional security are explored and explained by the researcher. Using Mackinder's Heartland, Spykman's Rimland, and Hegemonic Stability theories, it examines China's role in Central Asia. The study adheres to the empirical epistemological method and takes care to remain objective, critically analyzing primary and secondary research documents to elaborate the role of China's geo-economic outreach in Central Asian countries and its future prospects. According to this study, China is seeing significant success in trade, pipeline politics, and gaining influence over other governments, thanks to important instruments like the Shanghai Cooperation Organisation and the Belt and Road Economic Initiative.
Embedded machine learning-based road conditions and driving behavior monitoring - IJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
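To ground the headline numbers, the metrics reduce to simple confusion-matrix arithmetic; the counts below are invented, not the paper's data:

```python
# Accuracy, precision, and recall from confusion-matrix counts.
def metrics(tp: int, fp: int, fn: int, tn: int):
    accuracy  = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)  # of flagged events, how many were real
    recall    = tp / (tp + fn)  # of real events, how many were flagged
    return accuracy, precision, recall

acc, prec, rec = metrics(tp=92, fp=6, fn=8, tn=94)
print(f"accuracy={acc:.3f} precision={prec:.3f} recall={rec:.3f}")
```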
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte... - University of Maribor
Slides from a talk presenting: Aleš Zamuda, "Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapter and Networking."
Presented at the IcETRAN 2024 session "Inter-Society Networking Panel GRSS/MTT-S/CIS - Panel Session: Promoting Connection and Cooperation" (IEEE Slovenia GRSS, IEEE Serbia and Montenegro MTT-S, IEEE Slovenia CIS).
11th International Conference on Electrical, Electronic and Computing Engineering, 3-6 June 2024, Niš, Serbia.
Advanced control scheme of doubly fed induction generator for wind turbine us... - IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
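Of the three controllers compared, the PI law is the simplest; a minimal discrete sketch (toy first-order plant and gains, not the paper's MATLAB/Simulink DFIG model) shows its structure:

```python
# Discrete PI controller driving a toy first-order plant to a setpoint.
def simulate_pi(kp=0.8, ki=0.5, setpoint=1.0, dt=0.01, steps=2000):
    y, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - y
        integral += error * dt
        u = kp * error + ki * integral  # PI control law
        y += dt * (-y + u)              # toy plant: y' = -y + u
    return y

print(simulate_pi())  # settles near the setpoint of 1.0
```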
Electric vehicle and photovoltaic advanced roles in enhancing the financial p... - IJECEIAES
Climate change's impact on the planet has forced the United Nations and governments to promote green energies and electric transportation. The deployment of photovoltaic (PV) and electric vehicle (EV) systems has gained stronger momentum due to their numerous advantages over fossil fuel types; the advantages go beyond sustainability to financial support and stability. The work in this paper introduces a hybrid system between PV and EV to support industrial and commercial plants. The paper covers the theoretical framework of the proposed hybrid system, including the equations required to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram, which sets the priorities and requirements of the system, is presented. The proposed approach allows sites to improve their power stability, especially during power outages. The presented information supports researchers and plant owners in completing the necessary analysis while promoting the deployment of clean energy. The results of a case study representing a dairy farmer support the theoretical work and highlight the benefits to existing plants. The short return on investment supports the paper's novel approach to a sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line, which enhances the safety of the electrical network.
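The financial claim rests on ordinary payback arithmetic; all figures below are invented examples, not the case study's numbers:

```python
# Back-of-envelope payback and ROI for a PV + EV installation.
capex = 120_000.0          # up-front PV and EV-integration investment
annual_savings = 30_000.0  # avoided grid energy and outage losses per year

simple_payback_years = capex / annual_savings
roi_over_10y = (10 * annual_savings - capex) / capex
print(f"payback = {simple_payback_years:.1f} years, "
      f"10-year ROI = {roi_over_10y:.0%}")
```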
6th International Conference on Machine Learning & Applications (CMLA 2024) - ClaraZara1
The 6th International Conference on Machine Learning & Applications (CMLA 2024) will provide an excellent international forum for sharing knowledge and results in the theory, methodology, and applications of machine learning.
2. About Me - Aniket Kulkarni
● MSc in Computational Fluid Dynamics from Cranfield University
● Presales & application development @ Ansys
● Lead Data Scientist @ Aligned Automation
lnk.bio/ani.kulkarni
4. Virtual Modelling
● What is virtual modelling?
○ Wave equation: $\partial^2 u/\partial t^2 = c^2 \nabla^2 u$
○ Heat equation: $\partial u/\partial t = \alpha \nabla^2 u$ (see the sketch after this list)
○ Existing tools - Matlab, Ansys etc.
● The need
○ Cost reduction
○ Visualization
● Need for open source tools
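As a minimal example of virtual modelling with open source tools, the heat equation from the slide above can be solved with an explicit finite-difference scheme; the parameters are chosen for illustration:

```python
# Explicit finite differences for the 1-D heat equation u_t = alpha * u_xx.
import numpy as np

alpha, L, nx, dt, steps = 1.0, 1.0, 51, 1e-4, 2000
dx = L / (nx - 1)
assert alpha * dt / dx**2 <= 0.5  # stability limit of the explicit scheme

x = np.linspace(0, L, nx)
u = np.sin(np.pi * x)             # initial temperature profile
for _ in range(steps):
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    u[0] = u[-1] = 0.0            # fixed (Dirichlet) boundary conditions

# The exact peak decays as exp(-pi^2 * alpha * t); compare with the numerics.
print(u.max(), np.exp(-np.pi**2 * alpha * dt * steps))
```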
12. Summary
● Role of simulations in physical systems
● Advancements in simulations using Machine Learning
● Open source software in simulations
● SciML tools for scientific machine learning
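SciML itself is a Julia ecosystem (DifferentialEquations.jl and friends); as a rough, language-neutral taste of its core define-model/solve workflow, here is the analogous ODE solve in Python with SciPy (the logistic model and parameters are invented for illustration):

```python
# Solve a small ODE, the kind of building block SciML composes with ML.
from scipy.integrate import solve_ivp

def logistic(t, y, r=1.5, K=10.0):
    return r * y * (1 - y / K)  # dy/dt: logistic growth

sol = solve_ivp(logistic, t_span=(0, 10), y0=[0.5])
print(sol.y[0, -1])             # approaches the carrying capacity K = 10
```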