Kai Jia
PhD student
MIT CSAIL
Publications & Preprints
Limited-capability Games
Limited-perception Games
Kai Jia, Martin Rinard
TL;DR We study a new game model where players make decisions based on individualized inaccurate perceptions of the payoff functions. My PhD thesis (available via MIT DSpace) extends this work.
We study rational agents with different perception capabilities in strategic games. We focus on a class of one-shot limited-perception games. These games extend simultaneous-move normal-form games by presenting each player with an individualized perception of all players' payoff functions. The accuracy of a player's perception is determined by the player's capability level. Capability levels are countable and totally ordered, with a higher level corresponding to a more accurate perception. We study the rational behavior of players in these games and formalize relevant equilibria conditions. In contrast to equilibria in conventional bimatrix games, which can be represented by a pair of mixed strategies, in our limited-perception games a higher-order response function captures how the lower-capability player uses their (less accurate) perception of the payoff function to reason about the (more accurate) possible perceptions of the higher-capability opponent. This response function characterizes, for each possible perception of the higher-capability player (from the perspective of the lower-capability player), the best response of the higher-capability player for that perception. Since the domain of the response function can be exponentially large or even infinite, finding one equilibrium may be computationally intractable or even undecidable. Nevertheless, we show that for any \$\epsilon\$, there exists an \$\epsilon\$-equilibrium with a compact, tractable representation whose size is independent of the size of the response function's domain. We further identify classes of zero-sum limited-perception games in which finding an equilibrium becomes a (typically tractable) nonsmooth convex optimization problem.
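As a toy illustration of the higher-order response function (a minimal Python sketch; the payoff matrices, the finite set of candidate perceptions, the belief over them, and the grid search are all made-up simplifications, and the paper's equilibrium conditions are not computed here):

import numpy as np

# Toy two-player setting: the lower-capability row player perceives its own
# payoffs as A_low and entertains a finite set of matrices the higher-capability
# column player might perceive for the column payoffs.  All numbers are made up.
A_low = np.array([[3.0, 0.0],
                  [1.0, 2.0]])
candidate_B = [np.array([[2.0, 1.0],
                         [0.0, 3.0]]),
               np.array([[1.0, 2.0],
                         [3.0, 0.0]])]
belief = np.array([0.5, 0.5])          # row player's belief over the candidates

def column_best_response(B_hat, x):
    """Column player's best response if it perceives B_hat and anticipates x."""
    return int(np.argmax(x @ B_hat))

def row_payoff(x):
    """Row player's expected payoff under x, averaging over candidate perceptions
    and the column responses they induce (a finite 'response function')."""
    return sum(p * (x @ A_low[:, column_best_response(B_hat, x)])
               for p, B_hat in zip(belief, candidate_B))

# Crude grid search over the row player's mixed strategies.
grid = [np.array([t, 1.0 - t]) for t in np.linspace(0.0, 1.0, 101)]
best = max(grid, key=row_payoff)
print("row strategy:", best, "expected payoff:", round(float(row_payoff(best)), 3))

In the actual model, the set of possible higher-capability perceptions can be exponentially large or infinite, which is why the compact \$\epsilon\$-equilibrium representation established in the paper matters.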
@misc{jia2024limited,
  title={Limited-perception games},
  author={Kai Jia and Martin Rinard},
  year={2024},
  eprint={2405.16735},
  archivePrefix={arXiv},
  primaryClass={cs.GT},
  url={https://arxiv.org/abs/2405.16735}
}
On the Impact of Player Capability on Congestion Games
Yichen Yang *, Kai Jia *, Martin Rinard (*: Equal contributions)
TL;DR We study a new kind of congestion game in which players have limited capability to form strategies, and we analyze the impact of player capability on social welfare.
We study the impact of player capability on social welfare in congestion games. We introduce a new game, the Distance-bounded Network Congestion game (DNC), as the basis of our study. DNC is a symmetric network congestion game with a bound on the number of edges each player can use. We show that DNC is PLS-complete in contrast to standard symmetric network congestion games which are in P. To model different player capabilities, we propose using programs in a Domain-Specific Language (DSL) to compactly represent player strategies. We define a player's capability as the maximum size of the programs they can use. We introduce two variants of DNC with accompanying DSLs representing the strategy spaces. We propose four capability preference properties to characterize the impact of player capability on social welfare at equilibrium. We then establish necessary and sufficient conditions for the four properties in the context of our DNC variants. Finally, we study a specific game where we derive exact expressions of the social welfare in terms of the capability bound. This provides examples where the social welfare at equilibrium increases, stays the same, or decreases as players become more capable.
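As a concrete toy version of the model (the graph, the affine latencies, the welfare definition as negative total latency, and the best-response loop below are illustrative assumptions; the paper's DSL-based capability measure is not modeled here):

edges = {('s', 'v'): (1.0, 0.0), ('v', 't'): (1.0, 0.0),
         ('s', 'w'): (0.0, 1.5), ('w', 't'): (0.0, 1.5),
         ('s', 't'): (2.0, 0.0)}      # hypothetical edge -> (a, b), latency a * load + b

def paths(src, dst, max_edges, seen=()):
    """All simple src -> dst paths that use at most max_edges edges."""
    if src == dst:
        yield ()
        return
    if max_edges == 0:
        return
    for (u, v) in edges:
        if u == src and v not in seen:
            for rest in paths(v, dst, max_edges - 1, seen + (u,)):
                yield ((u, v),) + rest

def cost(path, profile, me):
    """Latency player `me` experiences on `path`, given everyone's paths."""
    total = 0.0
    for e in path:
        load = sum(e in p for i, p in enumerate(profile) if i != me) + 1
        a, b = edges[e]
        total += a * load + b
    return total

def best_response_dynamics(n_players, bound, max_rounds=100):
    strategies = list(paths('s', 't', bound))
    profile = [strategies[0]] * n_players
    for _ in range(max_rounds):
        changed = False
        for i in range(n_players):
            best = min(strategies, key=lambda p: cost(p, profile, i))
            if cost(best, profile, i) < cost(profile[i], profile, i) - 1e-9:
                profile[i], changed = best, True
        if not changed:
            break                      # a pure Nash equilibrium of the bounded game
    return profile

for bound in (1, 2):                   # the distance bound on each player's path
    eq = best_response_dynamics(n_players=3, bound=bound)
    welfare = -sum(cost(p, eq, i) for i, p in enumerate(eq))
    print(f"bound={bound}: social welfare {welfare:.2f} at equilibrium")

Re-running with different bounds gives a miniature version of the welfare-versus-capability comparison studied in the paper.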
@inproceedings{yang2022on,
  author="Yang, Yichen and Jia, Kai and Rinard, Martin",
  editor="Kanellopoulos, Panagiotis and Kyropoulou, Maria and Voudouris, Alexandros",
  title="On the Impact of Player Capability on Congestion Games",
  booktitle="Algorithmic Game Theory",
  year="2022",
  publisher="Springer International Publishing",
  address="Cham",
  pages="311--328",
  isbn="978-3-031-15714-1"
}
Neural Network Verification
Exploiting Verified Neural Networks via Floating Point Numerical Error
Kai Jia, Martin Rinard
TL;DR Complete verifiers (and many incomplete verifiers) for neural networks ignore floating point error to achieve practical speed. We show that this leads to incorrect verification results by presenting adversarial examples that the verifiers claim cannot exist.

Motivated by the need to reliably characterize the robustness of deep neural networks, researchers have developed verification algorithms for deep neural networks. Given a neural network, the verifiers aim to answer whether certain properties are guaranteed with respect to all inputs in a space. However, little attention has been paid to floating point numerical error in neural network verification.

We show that neglecting floating point error is easily exploitable in practice. For a pretrained neural network, we present a method that efficiently searches for inputs on which a complete verifier incorrectly claims the network is robust. We also present a method to construct neural network architectures and weights that induce incorrect results from an incomplete verifier. Our results highlight that, to achieve practically reliable verification of neural networks, any verification system must accurately (or conservatively) model the effects of any floating point computations in the network inference or verification system.
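
The snippet below is not the paper's search procedure; it only reproduces, with made-up numbers, the underlying floating point effect: in float32 the same dot product evaluates differently depending on summation order, and a verifier that reasons over the reals cannot distinguish the two.

import numpy as np

# A toy pre-activation w . x evaluated in float32 in two different orders.
# Values are chosen to make the rounding visible.
w = np.float32([1.0, 1.0, 1.0])
x = np.float32([1e8, 1.0, -1e8])

left_to_right = np.float32(0.0)
for wi, xi in zip(w, x):
    left_to_right += wi * xi          # (0 + 1e8) + 1 rounds back to 1e8, then - 1e8 = 0.0

reordered = (w[0] * x[0] + w[2] * x[2]) + w[1] * x[1]   # (1e8 - 1e8) + 1 = 1.0

print(left_to_right, reordered)       # 0.0 vs 1.0 for the same real-valued expression

# A verifier that reasons over the reals treats both orders (and the exact value)
# as identical, so a robustness certificate it issues may not hold for the network
# as actually executed -- the gap that such an attack exploits.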

@inproceedings{jia2021exploiting,
  author="Jia, Kai and Rinard, Martin",
  editor="Dr{\u{a}}goi, Cezara and Mukherjee, Suvam and Namjoshi, Kedar",
  title="Exploiting Verified Neural Networks via Floating Point Numerical Error",
  booktitle="Static Analysis",
  year="2021",
  publisher="Springer International Publishing",
  address="Cham",
  pages="191--205",
  isbn="978-3-030-88806-0"
}
Verifying Low-dimensional Input Neural Networks via Input Quantization
Kai Jia, Martin Rinard
TL;DR For neural networks with low-dimensional inputs, enumerating the states of a discretized input space is a more efficient and robust verification approach than running sophisticated general-purpose verifiers.

Deep neural networks are an attractive tool for compressing the control policy lookup tables in systems such as the Airborne Collision Avoidance System (ACAS). It is vital to ensure the safety of such neural controllers via verification techniques. The problem of analyzing ACAS Xu networks has motivated many successful neural network verifiers. These verifiers typically analyze the internal computation of neural networks to decide whether a property regarding the input/output holds. The intrinsic complexity of neural network computation renders such verifiers slow to run and vulnerable to floating-point error.

This paper revisits the original problem of verifying ACAS Xu networks. The networks take low-dimensional sensory inputs with training data extracted from a lookup table. We propose to prepend an input quantization layer to the network. Quantization allows efficient verification via input state enumeration, whose complexity is bounded by the size of the quantization space. Quantization is equivalent to nearest-neighbor interpolation at run time, which has been shown to provide acceptable accuracy for ACAS in simulation. Moreover, our technique can deliver exact verification results immune to floating-point error if we directly enumerate the network outputs on the target inference implementation or on an accurate simulation of the target implementation.
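
A minimal sketch of the verification-by-enumeration idea (the tiny network, the property, and the grid resolution below are hypothetical; the real ACAS Xu networks have 5 inputs and 5 advisory outputs):

import itertools

import numpy as np

rng = np.random.default_rng(0)

# A hypothetical tiny network with 2 low-dimensional inputs and 3 outputs.
W1, b1 = rng.normal(size=(16, 2)), rng.normal(size=16)
W2, b2 = rng.normal(size=(3, 16)), rng.normal(size=3)

def network(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# Input quantization layer prepended to the network: snap each input coordinate
# to the nearest point of a uniform grid over its range.
grids = [np.linspace(-1.0, 1.0, 21), np.linspace(-1.0, 1.0, 21)]

def quantize(x):
    return np.array([g[np.argmin(np.abs(g - xi))] for g, xi in zip(grids, x)])

def quantized_network(x):
    return network(quantize(x))

def property_of_interest(y):
    # Hypothetical ACAS-style output property: advisory 0 is never selected.
    return int(np.argmax(y)) != 0

# Verification by enumeration: for any real-valued input, the quantized network
# only ever evaluates the original network on a grid point, so checking every
# grid point (21 * 21 states here) covers all inputs exactly -- and running the
# actual inference code captures floating point behavior instead of modeling it.
verified = all(property_of_interest(quantized_network(np.array(p)))
               for p in itertools.product(*grids))
print("property holds on all quantized inputs:", verified)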

@inproceedings{jia2021verifying,
  author="Jia, Kai and Rinard, Martin",
  editor="Dr{\u{a}}goi, Cezara and Mukherjee, Suvam and Namjoshi, Kedar",
  title="Verifying Low-Dimensional Input Neural Networks via Input Quantization",
  booktitle="Static Analysis",
  year="2021",
  publisher="Springer International Publishing",
  address="Cham",
  pages="206--214",
  isbn="978-3-030-88806-0"
}
Efficient Exact Verification of Binarized Neural Networks
Kai Jia, Martin Rinard
TL;DR Efficient (thousands of times faster) verification of BNNs via improved SAT solving and training methods; methods for training robust BNNs.
Concerned with the reliability of neural networks, researchers have developed verification techniques to prove their robustness. Most verifiers work with real-valued networks. Unfortunately, the exact (complete and sound) verifiers face scalability challenges and provide no correctness guarantees due to floating point errors. We argue that Binarized Neural Networks (BNNs) provide comparable robustness and allow exact and significantly more efficient verification. We present a new system, EEV, for efficient and exact verification of BNNs. EEV consists of two parts: (i) a novel SAT solver that speeds up BNN verification by natively handling the reified cardinality constraints arising in BNN encodings; and (ii) strategies to train solver-friendly robust BNNs by inducing balanced layer-wise sparsity and low cardinality bounds, and adaptively cancelling the gradients. We demonstrate the effectiveness of EEV by presenting the first exact verification results for L-inf-bounded adversarial robustness of nontrivial convolutional BNNs on the MNIST and CIFAR10 datasets. Compared to exact verification of real-valued networks of the same architectures on the same tasks, EEV verifies BNNs hundreds to thousands of times faster, while delivering comparable verifiable accuracy in most cases.
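A small sketch of why BNN inference naturally yields reified cardinality constraints (the ±1 encoding, the sign(0) convention, and the bias handling below are simplifying assumptions for illustration; the exact EEV encoding is described in the paper):

import numpy as np

# A binarized neuron computes y = sign(w . x + b) with w, x in {-1, +1}^n.
# Since w_i * x_i = +1 exactly when w_i == x_i, we have w . x = 2*agree - n,
# where agree = #{i : w_i == x_i}.  Hence
#     y = +1   iff   agree >= ceil((n - b) / 2),
# i.e. the output variable reifies an "at least k of these literals" constraint,
# which the solver can handle natively.

def bnn_neuron(w, x, b):
    return 1 if w @ x + b >= 0 else -1            # sign(0) taken as +1 here

def reified_cardinality(w, x, b):
    n = len(w)
    agree = int(np.sum(w == x))
    return 1 if agree >= int(np.ceil((n - b) / 2)) else -1

rng = np.random.default_rng(0)
w = rng.choice([-1, 1], size=64)
x = rng.choice([-1, 1], size=64)
assert all(bnn_neuron(w, x, b) == reified_cardinality(w, x, b) for b in range(-10, 11))

EEV's solver handles these reified cardinality constraints natively, as described above.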
@inproceedings{jia2020efficient,
 author = {Jia, Kai and Rinard, Martin},
 booktitle = {Advances in Neural Information Processing Systems},
 editor = {H. Larochelle and M. Ranzato and R. Hadsell and M. F. Balcan and H. Lin},
 pages = {1782--1795},
 publisher = {Curran Associates, Inc.},
 title = {Efficient Exact Verification of Binarized Neural Networks},
 url = {https://proceedings.neurips.cc/paper/2020/file/1385974ed5904a438616ff7bdb3f7439-Paper.pdf},
 volume = {33},
 year = {2020}
}
Numerical Optimization
TRAFS: A Nonsmooth Convex Optimization Algorithm with \$\mathcal{O}\left(\frac{1}{\epsilon}\right)\$ Iteration Complexity
Kai Jia, Martin Rinard
TL;DR We propose a new nonsmooth convex optimization algorithm with iteration complexity \$\mathcal{O}\left(\frac{1}{\epsilon}\right)\$. The caveat is that the algorithm requires solving a minimax problem at each iteration, which can be easy or expensive depending on the objective function. Still, TRAFS achieves competitive performance on benchmark problems.
We present the Trust Region Adversarial Functional Subdifferential (TRAFS) algorithm for constrained optimization of nonsmooth convex Lipschitz functions. Unlike previous methods that assume a subgradient oracle model, we work with the functional subdifferential defined as a set of subgradients that simultaneously captures sufficient local information for effective minimization while being easy to compute for a wide range of functions. In each iteration, TRAFS finds the best step vector in an \$\ell_2\$-bounded trust region by considering the worst bound given by the functional subdifferential. TRAFS finds an approximate solution with an absolute error up to \$\epsilon\$ in \$\mathcal{O}\left( \epsilon^{-1}\right)\$ or \$\mathcal{O} \left(\epsilon^{-0.5} \right)\$ iterations depending on whether the objective function is strongly convex, compared to the previously best-known bounds of \$\mathcal{O}\left(\epsilon^{-2}\right)\$ and \$\mathcal{O}\left(\epsilon^{-1}\right)\$ in these settings. TRAFS makes faster progress if the functional subdifferential satisfies a locally quadratic property; as a corollary, TRAFS achieves linear convergence (i.e., \$\mathcal{O}\left(\log \epsilon^{-1}\right)\$) for strongly convex smooth functions. In the numerical experiments, TRAFS is on average 39.1x faster and solves twice as many problems compared to the second-best method.
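Schematically, the per-iteration subproblem described above can be written as (an illustrative reconstruction of the notation, not the paper's exact formulation):

\$d_k \in \arg\min_{\|d\|_2 \le r_k} \; \max_{g \in \partial_{\mathcal{F}} f(x_k)} \langle g, d \rangle, \qquad x_{k+1} = x_k + d_k,\$

where \$\partial_{\mathcal{F}} f(x_k)\$ denotes the functional subdifferential at \$x_k\$ and \$r_k\$ the trust-region radius. How hard this inner minimax is depends on the structure of \$f\$, which is the caveat noted in the TL;DR.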
@misc{jia2024trafs,
  title={TRAFS: A Nonsmooth Convex Optimization Algorithm with $\mathcal{O}\left(\frac{1}{\epsilon}\right)$ Iteration Complexity},
  author={Kai Jia and Martin Rinard},
  year={2024},
  eprint={2311.06205},
  archivePrefix={arXiv},
  primaryClass={math.OC},
  url={https://arxiv.org/abs/2311.06205}
}
SANM: A Symbolic Asymptotic Numerical Solver with Applications in Mesh Deformation
Kai Jia
TL;DR We present a system to automatically solve high-dimensional nonlinear equations via higher-order approximation. This paper also contains a short introduction to the very elegant and interesting but lesser-known Asymptotic Numerical Method. This work is an extension of my course project for 6.839 Advanced Computer Graphics.

Solving nonlinear systems is an important problem. Numerical continuation methods efficiently solve certain nonlinear systems. The Asymptotic Numerical Method (ANM) is a powerful continuation method that usually converges faster than Newtonian methods. ANM explores the landscape of the function by following a parameterized solution curve approximated with a high-order power series. Although ANM has successfully solved a few graphics and engineering problems, prior to our work, applying ANM to new problems required significant effort because the standard ANM assumes quadratic functions, while manually deriving the power series expansion for nonquadratic systems is a tedious and challenging task.

This paper presents a novel solver, SANM, that applies ANM to solve symbolically represented nonlinear systems. SANM solves such systems in a fully automated manner. SANM also extends ANM to support many nonquadratic operators, including intricate ones such as singular value decomposition. Furthermore, SANM generalizes ANM to support the implicit homotopy form. Moreover, SANM achieves high computing performance via optimized system design and implementation.

We deploy SANM to solve forward and inverse elastic force equilibrium problems and controlled mesh deformation problems with a few constitutive models. Our results show that SANM converges faster than Newtonian solvers, requires little programming effort for new problems, and delivers comparable or better performance than a hand-coded, specialized ANM solver. While we demonstrate SANM on mesh deformation problems, it is generic and potentially applicable to many tasks.
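
For readers new to ANM, here is a hand-rolled scalar toy example in Python (solving u + u^2 - lam = 0 by a power-series expansion in a path parameter; the problem, truncation order, and evaluation points are chosen purely for illustration, and SANM automates this machinery for symbolically represented high-dimensional systems):

import numpy as np

# Asymptotic Numerical Method on a scalar toy problem:
#     R(u, lam) = u + u**2 - lam = 0,   expanded around (u0, lam0) = (0, 0).
# Both unknowns are power series in a path parameter a,
#     u(a) = sum_k a**k * u[k],   lam(a) = sum_k a**k * lam[k],
# and the coefficients follow from one small linear solve per order.
order = 15
u = np.zeros(order + 1)
lam = np.zeros(order + 1)

# Order 1: the linearized residual gives u1 - lam1 = 0; the path-parameter
# normalization u1**2 + lam1**2 = 1 fixes the scale.
u[1] = lam[1] = 1.0 / np.sqrt(2.0)

# Orders k >= 2: the quadratic term contributes sum_{i=1}^{k-1} u_i * u_{k-i};
# the path-parameter definition adds the orthogonality row u1*u_k + lam1*lam_k = 0.
A = np.array([[1.0, -1.0],
              [u[1], lam[1]]])        # the same tangent matrix at every order
for k in range(2, order + 1):
    rhs = -sum(u[i] * u[k - i] for i in range(1, k))
    u[k], lam[k] = np.linalg.solve(A, np.array([rhs, 0.0]))

def eval_series(c, a):
    return sum(c[k] * a ** k for k in range(order + 1))

for a in (0.1, 0.3, 0.5):             # points along the solution branch
    ua, la = eval_series(u, a), eval_series(lam, a)
    print(f"a={a}: u={ua:.6f}, lam={la:.6f}, residual={ua + ua**2 - la:.2e}")

Each order reuses the same tangent matrix (here a 2x2 system), which is part of what makes ANM-style continuation cheap compared to repeated Newton solves.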

@article{jia2021sanm,
  title={{SANM}: A Symbolic Asymptotic Numerical Solver with Applications in Mesh Deformation},
  author={Jia, Kai},
  journal={{ACM} Transactions on Graphics (Proc. {SIGGRAPH})},
  publisher={ACM},
  year={2021},
  volume={40},
  number={4}
}
Machine Learning Applications and Tricks
A Pancreatic Cancer Risk Prediction Model (Prism) Developed and Validated on Large-scale US Clinical Data
Kai Jia, Steven Kundrot, Matvey B. Palchuk, Jeff Warnick, Kathryn Haapala, Irving D. Kaplan, Martin Rinard, Limor Appelbaum
TL;DR We developed a pancreatic cancer (Pancreatic Duct Adenocarcinoma, PDAC) risk prediction model that predicts a patient's risk of developing PDAC 6 to 18 months in the future. The model is based on routinely collected health records, which makes future clinical deployment highly feasible.

Background
Pancreatic Duct Adenocarcinoma (PDAC) screening can enable early-stage disease detection and long-term survival. Current guidelines select patients for screening based on inherited predisposition, which makes about 10% of PDAC cases eligible. Using Electronic Health Record (EHR) data from a multi-institutional federated network, we developed and validated a PDAC RISk Model (Prism) for the general US population to extend early PDAC detection.

Methods
Neural Network (PrismNN) and Logistic Regression (PrismLR) were developed using EHR data from 55 US Health Care Organisations (HCOs) to predict PDAC risk 6–18 months before diagnosis for patients 40 years or older. Model performance was assessed using Area Under the Curve (AUC) and calibration plots. Models were internal-externally validated by geographic location, race, and time. Simulated model deployment evaluated Standardised Incidence Ratio (SIR) and other metrics.

Findings
With 35,387 PDAC cases, 1,500,081 controls, and 87 features per patient, PrismNN obtained a test AUC of 0.826 (95% CI: 0.824–0.828) (PrismLR: 0.800 (95% CI: 0.798–0.802)). PrismNN's average internal-external validation AUCs were 0.740 for locations, 0.828 for races, and 0.789 (95% CI: 0.762–0.816) for time. At SIR = 5.10 (exceeding the current screening inclusion threshold) in simulated model deployment, PrismNN sensitivity was 35.9% (specificity 95.3%).

Interpretation
Prism models demonstrated good accuracy and generalizability across diverse populations. PrismNN could find 3.5 times more cases at comparable risk than current screening guidelines. The small number of features provided a basis for model interpretation. Integration with the federated network provided data from a large, heterogeneous patient population and a pathway to future clinical deployment.

Funding
Prevent Cancer Foundation, TriNetX, Boeing, DARPA, NSF, and Aarno Labs.
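
For orientation only, the following is a generic sketch of the kind of train/test discrimination and operating-point evaluation described in the Methods above, run on synthetic tabular data (this is not the Prism pipeline, feature set, or cohort; scikit-learn, the synthetic data generator, and the 5% flagging threshold are all illustrative assumptions):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for tabular EHR-derived features (e.g., 87 per patient)
# with a rare positive class; none of this reflects the real Prism data.
n, d = 50_000, 87
X = rng.normal(size=(n, d))
logits = X[:, :5] @ rng.normal(size=5) - 4.0          # only a few informative features
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logits))     # rare outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]
print("test AUC:", round(roc_auc_score(y_te, scores), 3))

# Choosing an operating threshold then yields a sensitivity/specificity trade-off,
# which the paper reports alongside the Standardised Incidence Ratio.
threshold = np.quantile(scores, 0.95)                  # flag the top 5% as high risk
pred = scores >= threshold
sensitivity = (pred & y_te).sum() / y_te.sum()
specificity = (~pred & ~y_te).sum() / (~y_te).sum()
print(f"sensitivity {sensitivity:.3f}, specificity {specificity:.3f}")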

@article{jia2023pancreatic,
  title={A pancreatic cancer risk prediction model (Prism) developed and validated on large-scale US clinical data},
  author={Jia, Kai and Kundrot, Steven and Palchuk, Matvey B and Warnick, Jeff and Haapala, Kathryn and Kaplan, Irving D and Rinard, Martin and Appelbaum, Limor},
  journal={EBioMedicine},
  volume={98},
  year={2023},
  publisher={Elsevier}
}
Effective Neural Network \$\ell_0\$ Regularization With BinMask
Kai Jia, Martin Rinard
TL;DR BinMask (i.e., multiplying weights by binary masks and using a straight-through estimator for backpropagation) appears to be an effective method for \$\ell_0\$ regularization in neural networks, in terms of ease of training, few hyperparameters, and final model performance. Of note, BinMask was also used to train sparse binarized neural networks for fast verification.
\$\ell_0\$ regularization of neural networks is a fundamental problem. In addition to regularizing models for better generalizability, \$\ell_0\$ regularization also applies to selecting input features and training sparse neural networks. There is a large body of research on related topics, some with quite complicated methods. In this paper, we show that a straightforward formulation, BinMask, which multiplies weights with deterministic binary masks and uses the identity straight-through estimator for backpropagation, is an effective \$\ell_0\$ regularizer. We evaluate BinMask on three tasks: feature selection, network sparsification, and model regularization. Despite its simplicity, BinMask achieves competitive performance on all the benchmarks without task-specific tuning compared to methods designed for each task. Our results suggest that decoupling weights from mask optimization, which has been widely adopted by previous work, is a key component for effective \$\ell_0\$ regularization.
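A minimal PyTorch sketch of the formulation as described above (module and parameter names are made up, and this is a reading of the abstract, not the paper's reference implementation):

import torch
import torch.nn as nn
import torch.nn.functional as F

class BinMaskLinear(nn.Module):
    """Linear layer whose weights are multiplied by a deterministic binary mask.

    The mask is (score > 0) in the forward pass; the identity straight-through
    estimator passes gradients to `score` as if the binarization were absent,
    so mask optimization is decoupled from weight optimization.
    """

    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.score = nn.Parameter(torch.ones(out_features, in_features))
        nn.init.kaiming_uniform_(self.weight, a=5 ** 0.5)

    def forward(self, x):
        hard = (self.score > 0).float()
        mask = hard + self.score - self.score.detach()   # identity STE
        return F.linear(x, self.weight * mask, self.bias)

# Usage: train as usual; penalizing the sum of the binarized mask (differentiable
# through the STE) is one natural L0 surrogate -- the paper's exact regularization
# and schedule may differ from this sketch.
layer = BinMaskLinear(64, 16)
out = layer(torch.randn(8, 64))
print("active fraction:", (layer.score > 0).float().mean().item())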
@misc{jia2023effective,
  title={Effective Neural Network $L_0$ Regularization With BinMask},
  author={Kai Jia and Martin Rinard},
  year={2023},
  eprint={2304.11237},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2304.11237}
}
Sound Explanation for Trustworthy Machine Learning
Kai Jia, Pasapol Saowakon, Limor Appelbaum, Martin Rinard
TL;DR For attribution-based model interpretation methods, we propose four desirable properties and prove that no method satisfies all four. We instead advocate sound explanations, which must provide sufficient information to causally explain the model's decisions.
We take a formal approach to the explainability problem of machine learning systems. We argue against the practice of interpreting black-box models via attributing scores to input components due to inherently conflicting goals of attribution-based interpretation. We prove that no attribution algorithm satisfies specificity, additivity, completeness, and baseline invariance. We then formalize the concept, sound explanation, that has been informally adopted in prior work. A sound explanation entails providing sufficient information to causally explain the predictions made by a system. Finally, we present the application of feature selection as a sound explanation for cancer prediction models to cultivate trust among clinicians.
@misc{jia2023sound,
  title={Sound Explanation for Trustworthy Machine Learning},
  author={Kai Jia and Pasapol Saowakon and Limor Appelbaum and Martin Rinard},
  year={2023},
  eprint={2306.06134},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2306.06134}
}
Posts