Scale Presents

AI Futures

Piecing together our future through stories from the frontlines of AI

Episode 1: Christian Szegedy
Staff Research Scientist at Google AI

Background

Christian Szegedy is a researcher at Google Brain. He discovered adversarial examples and invented Batch Normalization, and his computer vision research laid the foundations for modern convolutional neural network architectures.

Today, he's working on formal reasoning and dreams of creating an automated software engineer.

Szegedy’s Publications

  • 2013: Deep Neural Networks for Object Detection (Advances in Neural Information Processing Systems)

  • 2014: Scalable Object Detection using Deep Neural Networks (Computer Vision and Pattern Recognition)

  • 2014: Intriguing properties of neural networks (International Conference on Learning Representations)

  • 2014: DeepPose: Human Pose Estimation via Deep Neural Networks (Computer Vision and Pattern Recognition)

  • 2015: Training Deep Neural Networks on Noisy Labels with Bootstrapping (International Conference on Learning Representations)

  • 2015: Scalable, high-quality object detection (arXiv)

  • 2015: Large Scale Business Discovery from Street Level Imagery (arXiv)

  • 2015: Going Deeper with Convolutions (Computer Vision and Pattern Recognition)

  • 2015: Explaining and Harnessing Adversarial Examples (International Conference on Learning Representations)

  • 2015: Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift (32nd International Conference on Machine Learning)

  • 2015: SSD: Single Shot MultiBox Detector (European Conference on Computer Vision)

  • 2016: Rethinking the Inception Architecture for Computer Vision (Computer Vision and Pattern Recognition)

  • 2016: Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning (International Conference on Learning Representations 2016 Workshop)

  • 2016: DeepMath - Deep Sequence Models for Premise Selection (Advances in Neural Information Processing Systems)

  • 2017: HolStep: a Machine Learning Dataset for Higher-Order Logic Theorem Proving (International Conference on Learning Representations)

  • 2017: Deep Network Guided Proof Search (LPAR-21, 21st International Conference on Logic for Programming, Artificial Intelligence and Reasoning, EasyChair)

Interview Highlights

01:01
Alex Wang

So, I wanted to start just by asking: when you were working on perception, starting six or seven years ago, why were you working on it? Why did you think it was interesting or important research?

Christian Szegedy

So, when I joined Google in 2010, AI was not really a popular topic; most people looked at it with very skeptical eyes. My purpose in joining Google was to learn machine learning and AI. Actually, I was not so much into perception per se. I was much more excited about learning machine learning in general, because my goal was always to design systems that are artificially intelligent. So reasoning was my original motivation to learn machine learning, but at that time, vision was one of the most obvious outlets.

And I had the luck that I managed to get into a group that did research on computer vision.

Adversarial Examples

Adversarial examples are inputs to a neural network that have been subtly perturbed so that the network produces an incorrect output.

Read More on OpenAI
[Figure: an image classified as “king penguin” with 100.0% confidence, plus imperceptible noise, is classified as “tripod” with 71.0% confidence.]
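For illustration, here is a minimal sketch of the fast gradient sign method from Explaining and Harnessing Adversarial Examples (listed above). The PyTorch framing, the model, and the epsilon value are assumptions for the sake of the example, not code from the paper.

```python
# Minimal fast-gradient-sign-method (FGSM) sketch; assumes a
# differentiable PyTorch classifier and inputs scaled to [0, 1].
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.007):
    """Perturb `image` one epsilon-step along the sign of the loss gradient."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # For small epsilon the perturbation is imperceptible to humans,
    # yet it often flips the model's prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```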
05:27
Alex Wang

The title of your paper on adversarial examples was Intriguing Properties of Neural Networks. It was almost like you had discovered this curiosity, and it wasn't really framed in the context that it is now. Right now, safety is the primary context in which people talk about adversarial examples.

Christian Szegedy

Yeah, so actually it's a stupid story, because I had these adversarial examples lying in my drawer for more than a year, almost two years. I discovered them in 2011, but I was too lazy to publish them. Then Wojciech came to me and wanted to write a paper, and he said, "Okay, you have this thing, and we can combine it with other stuff and publish a joint paper with various intriguing properties." Then people started to bail out and didn't put in their own stuff, because it was not interesting enough or whatever, and the paper ended up being mostly about adversarial examples.

But if I had known that beforehand, I would have just written a paper with Wojciech alone, or maybe completely alone, with a title like Adversarial Examples. Actually, a year earlier I had planned with my manager to write a paper, just on that topic, with a title like Blind Spots in Neural Networks, but I was too lazy to do it.

Inception Module

The Inception module factorizes a plain convolution into a collection of spatially and channel-wise smaller convolutions. This factorization addresses different modes of spatial and channel-wise decomposability in its separate paths, and thus manages to greatly reduce both the amount of computation and the number of parameters without hurting expressiveness.
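As a rough sketch of the idea, the module below runs several smaller convolutions in parallel and concatenates their outputs along the channel dimension. The PyTorch framing and the channel counts are illustrative assumptions, not the published GoogLeNet configuration.

```python
# Minimal Inception-style module sketch (after "Going Deeper with
# Convolutions"); channel counts are illustrative, not the paper's.
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        # Parallel paths cover different spatial/channel decompositions.
        self.path1 = nn.Conv2d(in_ch, 16, kernel_size=1)  # plain 1x1
        self.path2 = nn.Sequential(                        # 1x1 bottleneck, then 3x3
            nn.Conv2d(in_ch, 24, kernel_size=1),
            nn.Conv2d(24, 32, kernel_size=3, padding=1),
        )
        self.path3 = nn.Sequential(                        # 1x1 bottleneck, then 5x5
            nn.Conv2d(in_ch, 4, kernel_size=1),
            nn.Conv2d(4, 8, kernel_size=5, padding=2),
        )
        self.path4 = nn.Sequential(                        # pooling, then 1x1
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, 8, kernel_size=1),
        )

    def forward(self, x):
        # All paths preserve spatial size, so their outputs can be
        # concatenated channel-wise (16 + 32 + 8 + 8 = 64 channels here).
        paths = (self.path1, self.path2, self.path3, self.path4)
        return torch.cat([p(x) for p in paths], dim=1)
```

The 1x1 bottlenecks before the larger kernels are what keep the computation and parameter counts low relative to a single plain convolution of equal width.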

30:11
Alex Wang

What would you say are the other exciting or potentially underrated areas of research in AI right now?

Christian Szegedy

I think that goes back to another one of your questions: what should we do about AI being misused?

A lot of people pay lip service and say, "Yeah, we do this and that." But the real questions are how you combat certain negative effects of machine learning, and what those negative effects are, because a lot of them are kind of invisible. How do people make decisions about our lives? For example, insurance companies, agencies, and things like that. And this is just one small thing; I don't really know everything. As AI gets applied more and more, all the biases that go into these AI systems will affect everybody more and more. I think that's something one should research much more and take much more seriously.
