Seybold Report ISSN: 1533-9211


INVESTIGATION ON DETECTION OF DATA POISONING ATTACKS: MOST POSSIBLE DEFENCES AND COUNTERMEASURES


Kireet Muppavaram
Assistant Professor, Dept. of CSE, School of Technology, GITAM Deemed to be University, Hyderabad. Email: kmuppava@gitam.edu

Aparna Shivampeta
Assistant Professor, Dept. of CSE, School of Technology, GITAM Deemed to be University, Hyderabad. Email: ashivampeta@gitam.edu

Hyma Biruduraju
Assistant Professor, Dept. of CSE, Gurunanak Institutions Technical Campus, Hyderabad. Email: hymaomkaram@gmail.com

Vishwesh Nagamalla
Assistant Professor, Dept. of CSE, Sreenidhi Institute of Science and Technology, Hyderabad. Email: vishwesh2010@gmail.com

Ishmatha Begum
Assistant Professor, Dept. of CSE, Gurunanak Institutions Technical Campus, Hyderabad. Email: ishmathabegum.gnit@gniindia.org


Vol 17, No 09 (2022) | DOI: 10.5281/zenodo.7106361 | Licensing: CC 4.0 | Pg no: 1261-1268 | Published on: 22-09-2022



Abstract
Machine learning has become one of the most prominent technologies for building high-end systems across a wide range of fields. This widespread adoption has made machine learning models attractive targets for attackers, who mount attacks such as data poisoning attacks, adversarial attacks, obfuscation attacks, side-channel attacks, model inversion attacks, and man-in-the-middle (MITM) attacks. It is therefore essential to secure machine learning models by protecting the integrity, confidentiality, and availability of their training and testing data. Through our study we found that data poisoning attacks constitute the majority of attacks attempted on machine learning systems. In this paper we carefully analyse data poisoning attacks on existing models and, based on our investigation, propose the most viable defences and countermeasures against them.
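To make the threat concrete, the sketch below (not taken from the paper; the dataset, poison rate, and the k-NN label-agreement defence are illustrative assumptions) simulates a simple label-flipping poisoning attack on a synthetic classification task and applies one possible countermeasure: sanitizing the training set by dropping points whose labels disagree with their neighbours, then retraining.

# Minimal label-flipping poisoning demo with a simple sanitization defence.
# All parameters (poison_rate, n_neighbors, dataset sizes) are illustrative
# assumptions, not values from the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Clean binary classification data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Attack: flip the labels of a random 20% of the training set.
poison_rate = 0.20
idx = rng.choice(len(y_train), size=int(poison_rate * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

# Defence: flag points whose label disagrees with the majority label of
# their 10 nearest neighbours, and retrain only on the remaining data.
knn = KNeighborsClassifier(n_neighbors=10).fit(X_train, y_poisoned)
agree = knn.predict(X_train) == y_poisoned
X_clean, y_clean = X_train[agree], y_poisoned[agree]

# Compare test accuracy of models trained on clean, poisoned, and
# sanitized data; sanitization typically recovers most of the lost accuracy.
for name, (Xtr, ytr) in {"clean": (X_train, y_train),
                         "poisoned": (X_train, y_poisoned),
                         "sanitized": (X_clean, y_clean)}.items():
    acc = LogisticRegression(max_iter=1000).fit(Xtr, ytr).score(X_test, y_test)
    print(f"{name:9s} test accuracy: {acc:.3f}")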


Keywords:
Machine learning, data poisoning attacks, integrity, confidentiality, availability, brute-force attacks.




