# https://openalex.org/T11689
# This cluster of papers focuses on the robustness of deep learning models against adversarial attacks, exploring topics such as adversarial examples, security, uncertainty estimation, defenses, and verification. It delves into the challenges and potential solutions for ensuring the resilience of neural networks in the face of malicious inputs.
@prefix oasubfields: <https://openalex.org/subfields/> .
@prefix openalex: <https://lambdamusic.github.io/openalex-hacks/ontology/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
<https://openalex.org/T11689> a skos:Concept ;
    rdfs:label "Adversarial Robustness in Deep Learning Models"@en ;
    rdfs:isDefinedBy openalex: ;
    owl:sameAs <https://en.wikipedia.org/wiki/Adversarial_machine_learning>,
        <https://openalex.org/T11689> ;
    skos:broader oasubfields:1702 ;
    skos:definition "This cluster of papers focuses on the robustness of deep learning models against adversarial attacks, exploring topics such as adversarial examples, security, uncertainty estimation, defenses, and verification. It delves into the challenges and potential solutions for ensuring the resilience of neural networks in the face of malicious inputs."@en ;
    skos:inScheme openalex: ;
    skos:prefLabel "Adversarial Robustness in Deep Learning Models"@en ;
    openalex:cited_by_count 487165 ;
    openalex:works_count 29755 .