
Adversarial Machine Learning

CAD$91.95 (P)

  • Date Published: April 2019
  • availability: Available
  • format: Hardback
  • isbn: 9781107043466

Other available formats:
eBook


Looking for an examination copy?

This title is not currently available for examination. However, if you are interested in the title for your course we can consider offering an examination copy. To register your interest please contact collegesales@cambridge.org providing details of the course you are teaching.

Description
Written by leading researchers, this complete introduction brings together all the theory and tools needed for building robust machine learning in adversarial environments. Discover how machine learning systems can adapt when an adversary actively poisons data to manipulate statistical inference, learn the latest practical techniques for investigating system security and performing robust data analysis, and gain insight into new approaches for designing effective countermeasures against the latest wave of cyber-attacks.

  Privacy-preserving mechanisms and the near-optimal evasion of classifiers are discussed in detail, and in-depth case studies on email spam and network security highlight successful attacks on traditional machine learning algorithms. Providing a thorough overview of the current state of the art in the field, and possible future directions, this groundbreaking work is essential reading for researchers, practitioners and students in computer security and machine learning, and those wanting to learn about the next stage of the cybersecurity arms race.

    • The first book to provide a state-of-the-art review of adversarial machine learning
    • Covers availability and integrity attacks, privacy-preserving mechanisms, near-optimal evasion of classifiers, and future directions for adversarial machine learning
    • Includes in-depth case studies on email spam and network security
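    The poisoning (causative) attacks the book covers can be illustrated with a toy sketch, not taken from the book itself: a nearest-centroid classifier over one-dimensional "spam scores", where an attacker who can inject mislabeled training points drags the benign centroid toward the spam region until a chosen spam message is misclassified. All data and function names here are hypothetical.

    ```python
    # Toy illustration of a causative (poisoning) attack on a
    # nearest-centroid classifier. Hypothetical data and names.

    def centroid(points):
        """Mean of a list of 1-D feature values."""
        return sum(points) / len(points)

    def classify(x, benign, spam):
        """Assign x to the class whose centroid is closer."""
        if abs(x - centroid(spam)) < abs(x - centroid(benign)):
            return "spam"
        return "benign"

    # Clean training data: benign mail scores low, spam scores high.
    benign = [0.1, 0.2, 0.3]
    spam = [0.8, 0.9, 1.0]

    target = 0.6  # a spam message the attacker wants misclassified
    print(classify(target, benign, spam))  # -> spam (correct)

    # Poisoning: inject high-scoring points mislabeled as benign,
    # shifting the benign centroid from 0.2 up to 0.75.
    poisoned_benign = benign + [1.2, 1.3, 1.4]
    print(classify(target, poisoned_benign, spam))  # -> benign (attack succeeds)
    ```

    Three injected points are enough here because the centroid is an unbounded-influence statistic; robust alternatives (e.g. a median) raise the attacker's cost, which is the kind of trade-off the book's framework makes precise.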

    Reviews & endorsements

    'Data Science practitioners tend to be unaware of how easy it is for adversaries to manipulate and misuse adaptive machine learning systems. This book demonstrates the severity of the problem by providing a taxonomy of attacks and studies of adversarial learning. It analyzes older attacks as well as recently discovered surprising weaknesses in deep learning systems. A variety of defenses are discussed for different learning systems and attack types that could help researchers and developers design systems that are more robust to attacks.' Richard Lippmann, Lincoln Laboratory, Massachusetts Institute of Technology

    'This is a timely book. Right time and right book, written with an authoritative but inclusive style. Machine learning is becoming ubiquitous. But for people to trust it, they first need to understand how reliable it is.' Fabio Roli, University of Cagliari, Italy


    Customer reviews

    Not yet reviewed


    Product details

    • Date Published: April 2019
    • format: Hardback
    • isbn: 9781107043466
    • length: 338 pages
    • dimensions: 254 x 178 x 19 mm
    • weight: 0.84kg
    • contains: 37 b/w illus. 8 tables
    • availability: Available
  • Table of Contents

    Part I. Overview of Adversarial Machine Learning:
    1. Introduction
    2. Background and notation
    3. A framework for secure learning
    Part II. Causative Attacks on Machine Learning:
    4. Attacking a hypersphere learner
    5. Availability attack case study: SpamBayes
    6. Integrity attack case study: PCA detector
    Part III. Exploratory Attacks on Machine Learning:
    7. Privacy-preserving mechanisms for SVM learning
    8. Near-optimal evasion of classifiers
    Part IV. Future Directions in Adversarial Machine Learning:
    9. Adversarial machine learning challenges.

  • Authors

    Anthony D. Joseph, University of California, Berkeley
    Anthony D. Joseph is a Chancellor's Professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. He was formerly the Director of Intel Labs Berkeley.

    Blaine Nelson, Google
Blaine Nelson is a Software Engineer in the Counter-Abuse Technologies (CAT) team at Google. He has previously worked at the University of Potsdam and the University of Tübingen.

    Benjamin I. P. Rubinstein, University of Melbourne
    Benjamin I. P. Rubinstein is a Senior Lecturer in Computing and Information Systems at the University of Melbourne. He has previously worked at Microsoft Research, Google Research, Yahoo! Research, Intel Labs Berkeley, and IBM Research.

    J. D. Tygar, University of California, Berkeley
    J. D. Tygar is a Professor of Computer Science and a Professor of Information Management at the University of California, Berkeley.
