Kai Jia
PhD student
MIT CSAIL
Publications
Algorithmic Game Theory
On the Impact of Player Capability on Congestion Games
Yichen Yang *, Kai Jia *, Martin Rinard (*: Equal contributions)
TL;DR We propose a new kind of game in which players have limited capability, and we analyze the impact of player capability on social welfare in congestion games.
We study the impact of player capability on social welfare in congestion games. We introduce a new game, the Distance-bounded Network Congestion game (DNC), as the basis of our study. DNC is a symmetric network congestion game with a bound on the number of edges each player can use. We show that DNC is PLS-complete, in contrast to standard symmetric network congestion games, which are in P. To model different player capabilities, we propose using programs in a Domain-Specific Language (DSL) to compactly represent player strategies. We define a player's capability as the maximum size of the programs they can use. We introduce two variants of DNC with accompanying DSLs representing the strategy spaces. We propose four capability preference properties to characterize the impact of player capability on social welfare at equilibrium. We then establish necessary and sufficient conditions for the four properties in the context of our DNC variants. Finally, we study a specific game where we derive exact expressions for the social welfare in terms of the capability bound. This yields examples where the social welfare at equilibrium increases, stays the same, or decreases as players become more capable.
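To make the setting concrete, here is a minimal Python sketch (illustrative code, not from the paper; all names are hypothetical) of best-response dynamics in a DNC-style game: each player repeatedly switches to a cheapest source-to-sink path that uses at most k edges, computed with a Bellman-Ford-style dynamic program against the current edge loads. On the Braess-style instance below, raising the edge bound from k=2 to k=3 lowers the social welfare at equilibrium, one of the capability effects characterized in the paper.

import collections

def best_response(edges, src, dst, k, load, mine=frozenset()):
    # Cheapest src->dst path using at most k edges under current loads.
    # `edges` is a list of (u, v, cost_fn); cost_fn maps edge load to latency.
    # `mine` holds this player's current edges so their own load is excluded.
    best = {src: (0.0, ())}                  # node -> (cost, path so far)
    for _ in range(k):                       # DP over paths of <= k edges
        nxt = dict(best)
        for u, v, cost_fn in edges:
            if u in best:
                c, path = best[u]
                others = load.get((u, v), 0) - (1 if (u, v) in mine else 0)
                c2 = c + cost_fn(others + 1) # latency if this player joins
                if v not in nxt or c2 < nxt[v][0]:
                    nxt[v] = (c2, path + ((u, v),))
        best = nxt
    return best.get(dst, (float("inf"), None))

def equilibrium_cost(edges, n_players, src, dst, k, max_rounds=100):
    # Best-response dynamics. DNC is a congestion (hence potential) game,
    # so the dynamics converge to a pure Nash equilibrium.
    paths, load = [()] * n_players, collections.Counter()
    for _ in range(max_rounds):
        changed = False
        for i in range(n_players):
            mine = frozenset(paths[i])
            _, path = best_response(edges, src, dst, k, load, mine)
            if path is not None and frozenset(path) != mine:
                load.subtract(paths[i])
                load.update(path)
                paths[i], changed = path, True
        if not changed:
            break
    return sum(load[(u, v)] * f(load[(u, v)]) for u, v, f in edges)

# Braess-style network: a zero-latency shortcut usable only when k >= 3.
edges = [
    (0, 1, lambda n: n),   # s -> a, congestible
    (1, 3, lambda n: 7),   # a -> t, constant latency
    (0, 2, lambda n: 7),   # s -> b, constant latency
    (2, 3, lambda n: n),   # b -> t, congestible
    (1, 2, lambda n: 0),   # a -> b, free shortcut
]
for k in (2, 3):
    cost = equilibrium_cost(edges, n_players=6, src=0, dst=3, k=k)
    print(f"edge bound k={k}: total equilibrium cost = {cost}")
# -> 60 for k=2 but 72 for k=3: more capable players, worse social welfare.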
@inproceedings{yang2022on,
  author="Yang, Yichen and Jia, Kai and Rinard, Martin",
  editor="Kanellopoulos, Panagiotis and Kyropoulou, Maria and Voudouris, Alexandros",
  title="On the Impact of Player Capability on Congestion Games",
  booktitle="Algorithmic Game Theory",
  year="2022",
  publisher="Springer International Publishing",
  address="Cham",
  pages="311--328",
  isbn="978-3-031-15714-1"
}
Neural Network Verification
Exploiting Verified Neural Networks via Floating Point Numerical Error
Kai Jia, Martin Rinard
TL;DR Practical complete verifiers (and many incomplete verifiers) for neural networks ignore floating point error. We show that this leads to incorrect verification results by constructing adversarial examples that the verifiers claim cannot exist.

Motivated by the need to reliably characterize the robustness of deep neural networks, researchers have developed verification algorithms for such networks. Given a neural network, a verifier aims to decide whether certain properties hold for all inputs in a given space. However, little attention has been paid to floating point numerical error in neural network verification.

We show that neglecting floating point error is easily exploitable in practice. For a pretrained neural network, we present a method that efficiently searches for inputs on which a complete verifier incorrectly claims the network is robust. We also present a method to construct neural network architectures and weights that induce incorrect results from an incomplete verifier. Our results highlight that, to achieve practically reliable verification of neural networks, any verification system must accurately (or conservatively) model the effects of any floating point computations in the network inference or verification system.
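
The root cause is easy to demonstrate. Below is a small numpy illustration (not the paper's search procedure; the setup is hypothetical): floating point addition is not associative, so the same logit computed in different summation orders can land on different sides of a decision boundary, and a verifier reasoning about an idealized real-valued model can disagree with the deployed float32 inference.

import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)   # hypothetical final layer
x = rng.standard_normal(1024).astype(np.float32)   # hypothetical input

# Shift x so the logit w.x, computed in float64, is essentially zero,
# i.e. the input sits right on the decision boundary.
w64 = w.astype(np.float64)
logit64 = w64 @ x.astype(np.float64)
x -= (logit64 / (w64 @ w64) * w).astype(np.float32)

# The same mathematical dot product in three float32 summation orders:
a = np.dot(w, x)             # BLAS accumulation order
b = np.sum(w * x)            # numpy pairwise summation
c = np.float32(0.0)
for wi, xi in zip(w, x):     # naive sequential accumulation
    c += wi * xi

print(a, b, c)                              # tiny values, generally unequal
print(np.sign(a), np.sign(b), np.sign(c))   # near the boundary, even the
                                            # sign (the decision) may differ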

@inproceedings{jia2021exploiting,
  author="Jia, Kai and Rinard, Martin",
  editor="Dr{\u{a}}goi, Cezara and Mukherjee, Suvam and Namjoshi, Kedar",
  title="Exploiting Verified Neural Networks via Floating Point Numerical Error",
  booktitle="Static Analysis",
  year="2021",
  publisher="Springer International Publishing",
  address="Cham",
  pages="191--205",
  isbn="978-3-030-88806-0"
}
Verifying Low-dimensional Input Neural Networks via Input Quantization
Kai Jia, Martin Rinard
TL;DR For neural networks with low-dimensional inputs, enumerating the states of a discretized input space is more efficient and robust than running sophisticated verifiers.

Deep neural networks are an attractive tool for compressing the control policy lookup tables in systems such as the Airborne Collision Avoidance System (ACAS). It is vital to ensure the safety of such neural controllers via verification techniques. The problem of analyzing ACAS Xu networks has motivated many successful neural network verifiers. These verifiers typically analyze the internal computation of neural networks to decide whether an input/output property holds. The intrinsic complexity of neural network computation renders such verifiers slow to run and vulnerable to floating-point error.

This paper revisits the original problem of verifying ACAS Xu networks. The networks take low-dimensional sensory inputs with training data extracted from a lookup table. We propose to prepend an input quantization layer to the network. Quantization allows efficient verification via input state enumeration, whose complexity is bounded by the size of the quantization space. Quantization is equivalent to nearest-neighbor interpolation at run time, which has been shown to provide acceptable accuracy for ACAS in simulation. Moreover, our technique can deliver exact verification results immune to floating-point error if we directly enumerate the network outputs on the target inference implementation or on an accurate simulation of the target implementation.
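
The enumeration itself is then straightforward. Here is a hedged Python sketch (illustrative names, not the paper's code): prepend nearest-neighbor quantization, then check the property on every grid point. Since the quantized network can only ever see grid points, the enumeration is exhaustive, so the procedure is complete for the quantized network.

import itertools
import numpy as np

def quantize(x, grid):
    # Nearest-neighbor quantization per input dimension (the prepended layer).
    return np.array([g[np.argmin(np.abs(g - xi))] for g, xi in zip(grid, x)])

def verify_by_enumeration(net, grid, prop):
    # Check prop(net(q)) for every grid point q. The cost is the product of
    # the per-dimension grid sizes -- feasible only because the input is
    # low-dimensional (the ACAS Xu networks have 5 inputs).
    for q in itertools.product(*grid):
        if not prop(net(np.array(q))):
            return False, q              # concrete counterexample
    return True, None

# Toy stand-ins for a controller network and a safety property (hypothetical).
net = lambda x: np.array([x[0] - x[1], x[0] + x[1]])
prop = lambda y: y[0] <= 5.0
grid = [np.linspace(-1.0, 1.0, 21) for _ in range(2)]
print(verify_by_enumeration(net, grid, prop))        # -> (True, None)
# At run time the deployed network applies the same quantization, so the
# object that was verified is exactly the object that runs:
print(net(quantize(np.array([0.13, -0.41]), grid)))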

@inproceedings{jia2021verifying,
  author="Jia, Kai and Rinard, Martin",
  editor="Dr{\u{a}}goi, Cezara and Mukherjee, Suvam and Namjoshi, Kedar",
  title="Verifying Low-Dimensional Input Neural Networks via Input Quantization",
  booktitle="Static Analysis",
  year="2021",
  publisher="Springer International Publishing",
  address="Cham",
  pages="206--214",
  isbn="978-3-030-88806-0"
}
Efficient Exact Verification of Binarized Neural Networks
Kai Jia, Martin Rinard
TL;DR Efficient (hundreds to thousands of times faster) exact verification of BNNs via improved SAT solving, plus training methods that produce robust, solver-friendly BNNs.
Concerned with the reliability of neural networks, researchers have developed verification techniques to prove their robustness. Most verifiers work with real-valued networks. Unfortunately, exact (complete and sound) verifiers face scalability challenges and provide no correctness guarantees due to floating point errors. We argue that Binarized Neural Networks (BNNs) provide comparable robustness and allow exact and significantly more efficient verification. We present a new system, EEV, for efficient and exact verification of BNNs. EEV consists of two parts: (i) a novel SAT solver that speeds up BNN verification by natively handling the reified cardinality constraints arising in BNN encodings; and (ii) strategies to train solver-friendly robust BNNs by inducing balanced layer-wise sparsity and low cardinality bounds, and adaptively cancelling the gradients. We demonstrate the effectiveness of EEV by presenting the first exact verification results for L-inf-bounded adversarial robustness of nontrivial convolutional BNNs on the MNIST and CIFAR10 datasets. Compared to exact verification of real-valued networks of the same architectures on the same tasks, EEV verifies BNNs hundreds to thousands of times faster, while delivering comparable verifiable accuracy in most cases.
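A short sketch of why reified cardinality constraints arise (illustrative code, not EEV's encoding): with weights and activations in {-1, +1}, a binarized neuron sign(w.x + b) fires exactly when the number of positions where the input agrees with the weights reaches a threshold, i.e. it is the reified constraint y <-> (sum of input literals >= t). A SAT solver that propagates such constraints natively avoids expanding each neuron into a large clausal encoding.

import itertools
import math

def bnn_neuron(w, x, b):
    # Reference semantics: sign(w . x + b), with sign(0) taken as +1.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

def as_cardinality(w, x, b):
    # The same neuron as a thresholded count: with w, x in {-1, +1}^n we have
    # w . x = 2 * agree - n, so w . x + b >= 0  iff  agree >= ceil((n - b) / 2).
    agree = sum(1 for wi, xi in zip(w, x) if wi == xi)
    return 1 if agree >= math.ceil((len(w) - b) / 2) else -1

# Exhaustive check that the two views coincide on a small neuron.
w, b = (1, -1, 1, 1, -1), 1
assert all(bnn_neuron(w, x, b) == as_cardinality(w, x, b)
           for x in itertools.product((-1, 1), repeat=len(w)))
print("sign semantics == cardinality constraint")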
@inproceedings{jia2020efficient,
 author = {Jia, Kai and Rinard, Martin},
 booktitle = {Advances in Neural Information Processing Systems},
 editor = {H. Larochelle and M. Ranzato and R. Hadsell and M. F. Balcan and H. Lin},
 pages = {1782--1795},
 publisher = {Curran Associates, Inc.},
 title = {Efficient Exact Verification of Binarized Neural Networks},
 url = {https://proceedings.neurips.cc/paper/2020/file/1385974ed5904a438616ff7bdb3f7439-Paper.pdf},
 volume = {33},
 year = {2020}
}
Side Projects
SANM: A Symbolic Asymptotic Numerical Solver with Applications in Mesh Deformation
Kai Jia
TL;DR This paper presents a system that automatically solves high-dimensional nonlinear equations via higher-order approximation. It also contains a short introduction to the elegant but lesser-known Asymptotic Numerical Method. This work extends my course project for 6.839 Advanced Computer Graphics.

Solving nonlinear systems is an important problem. Numerical continuation methods efficiently solve certain nonlinear systems. The Asymptotic Numerical Method (ANM) is a powerful continuation method that usually converges faster than Newtonian methods. ANM explores the landscape of the function by following a parameterized solution curve approximated with a high-order power series. Although ANM has successfully solved a few graphics and engineering problems, prior to our work, applying ANM to new problems required significant effort because the standard ANM formulation assumes quadratic functions, and manually deriving the power series expansion for nonquadratic systems is a tedious and challenging task.
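
To make the core idea concrete, here is a toy Python sketch of ANM on the scalar residual R(u, λ) = u² + u − λ (for simplicity the continuation parameter is taken as a = λ − λ₀ rather than ANM's usual pseudo-arclength; this example is ours, not code from SANM). Every series coefficient comes from a linear solve against the same Jacobian, which is what makes each ANM step cheap; SANM automates the analogous coefficient recurrences for symbolically represented, high-dimensional, nonquadratic systems.

import math

def anm_series(u0, order):
    # Power-series coefficients u_k of the solution branch of
    # R(u, lam) = u**2 + u - lam = 0 through (u0, lam0 = u0**2 + u0),
    # expanded in a = lam - lam0. Matching powers of a gives
    #   order 1:  J * u1 = 1
    #   order k:  J * uk = -sum_{j=1..k-1} uj * u(k-j)   (Cauchy product),
    # where J = dR/du at u0 is the SAME Jacobian at every order.
    J = 2.0 * u0 + 1.0                # assumed nonzero (no turning point)
    u = [0.0] * (order + 1)
    u[1] = 1.0 / J
    for k in range(2, order + 1):
        u[k] = -sum(u[j] * u[k - j] for j in range(1, k)) / J
    return u

u0, order, a = 0.0, 12, 0.1           # one series step of size a along lam
u = anm_series(u0, order)
approx = u0 + sum(uk * a**k for k, uk in enumerate(u))
lam = u0**2 + u0 + a
exact = (-1.0 + math.sqrt(1.0 + 4.0 * lam)) / 2.0
print(approx, exact)                  # agree to better than 1e-7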

This paper presents a novel solver, SANM, that applies ANM to solve symbolically represented nonlinear systems. SANM solves such systems in a fully automated manner. SANM also extends ANM to support many nonquadratic operators, including intricate ones such as singular value decomposition. Furthermore, SANM generalizes ANM to support the implicit homotopy form. Moreover, SANM achieves high computing performance via optimized system design and implementation.

We deploy SANM to solve forward and inverse elastic force equilibrium problems and controlled mesh deformation problems with a few constitutive models. Our results show that SANM converges faster than Newtonian solvers, requires little programming effort for new problems, and delivers comparable or better performance than a hand-coded, specialized ANM solver. While we demonstrate SANM on mesh deformation problems, it is generic and potentially applicable to many other tasks.

@article{jia2021sanm,
  title={{SANM}: A Symbolic Asymptotic Numerical Solver with Applications in Mesh Deformation},
  author={Jia, Kai},
  journal={{ACM} Transactions on Graphics (Proc. {SIGGRAPH})},
  publisher={ACM},
  year={2021},
  volume={40},
  number={4}
}
Posts