[1] Karch T, Teodorescu L, Hofmann K, et al. Grounding spatio-temporal language with transformers[J]. Advances in Neural Information Processing Systems, 2021, 34: 5236-5249.
[2] Wang J, Wang K C, Rudzicz F, et al. Grad2Task: Improved few-shot text classification using gradients for task representation[J]. Advances in Neural Information Processing Systems, 2021, 34: 6542-6554.
[3] Dahnert M, Hou J, Nießner M, et al. Panoptic 3D scene reconstruction from a single RGB image[J]. Advances in Neural Information Processing Systems, 2021, 34: 8282-8293.
[4] Tian Y, Yang W, Wang J. Image fusion using a multi-level image decomposition and fusion method[J]. Applied Optics, 2021, 60(24): 7466-7479.
[5] Geiger A, Lenz P, Urtasun R. Are we ready for autonomous driving? The KITTI vision benchmark suite[C]// Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2012: 3354-3361.
[6] Zheng X, Zhu J. Efficient LiDAR odometry for autonomous driving[J]. IEEE Robotics and Automation Letters, 2021, 6(4): 8458-8465.
[7] China Computer Federation. CCF 2019-2020 China Computer Science and Technology Development Report[M]. Beijing: China Machine Press, 2020.
[8] Liu W W, Song F, Zhang T H R, et al. Verifying ReLU neural networks from a model checking perspective[J]. Journal of Computer Science and Technology, 2020, 35(6): 1365-1381.
[9] Liang Z, Ren D, Liu W, et al. Safety verification for neural networks based on set-boundary analysis[DB/OL]. arXiv preprint: 2210.04175, 2022.
[10] Gehr T, Mirman M, Drachsler-Cohen D, et al. AI2: Safety and robustness certification of neural networks with abstract interpretation[C]// Proceedings of the 2018 IEEE Symposium on Security and Privacy. Piscataway: IEEE Press, 2018: 3-18.
[11] Katz G, Barrett C, Dill D L, et al. Reluplex: An efficient SMT solver for verifying deep neural networks[C]// Majumdar R, Kunčak V. Proceedings of the 29th International Conference on Computer Aided Verification. Cham: Springer, 2017: 97-117.
[12] Qiu X P. Neural Networks and Deep Learning[M]. Beijing: China Machine Press, 2020.
[13] Casadio M, Komendantskaya E, Daggitt M L, et al. Neural network robustness as a verification property: A principled case study[C]// Shoham S, Vizel Y. Proceedings of the 34th International Conference on Computer Aided Verification. Cham: Springer, 2022: 219-231.
[14] Ehlers R. Formal verification of piece-wise linear feed-forward neural networks[C]// D’Souza D, Kumar K N. Proceedings of the 15th International Symposium on Automated Technology for Verification and Analysis. Cham: Springer, 2017: 269-286.
[15] Lomuscio A, Maganti L. An approach to reachability analysis for feed-forward ReLU neural networks[DB/OL]. arXiv preprint: 1706.07351, 2017.
[16] Singh G, Gehr T, Mirman M, et al. Fast and effective robustness certification[J]. Advances in Neural Information Processing Systems, 2018, 31: 10825-10836.
[17] Yang X, Yamaguchi T, Tran H D, et al. Neural network repair with reachability analysis[C]// Bogomolov S, Parker D. Proceedings of the 20th International Conference on Formal Modeling and Analysis of Timed Systems. Cham: Springer, 2022: 221-236.
[18] Usman M, Gopinath D, Sun Y, et al. NNrepair: Constraint-based repair of neural network classifiers[C]// Silva A, Leino K R M. Proceedings of the 33rd International Conference on Computer Aided Verification. Cham: Springer, 2021: 3-25.
[19] Sun B, Sun J, Pham L H, et al. Causality-based neural network repair[C]// Proceedings of the 2022 IEEE/ACM 44th International Conference on Software Engineering. Piscataway: IEEE Press, 2022: 338-349.
[20] Shorten C, Khoshgoftaar T M. A survey on image data augmentation for deep learning[J]. Journal of Big Data, 2019, 6(1): 1-48.
[21] Goodfellow I J, Shlens J, Szegedy C. Explaining and harnessing adversarial examples[DB/OL]. arXiv preprint: 1412.6572, 2014.
[22] Madry A, Makelov A, Schmidt L, et al. Towards deep learning models resistant to adversarial attacks[DB/OL]. arXiv preprint: 1706.06083, 2017.
[23] Kurakin A, Goodfellow I, Bengio S. Adversarial machine learning at scale[DB/OL]. arXiv preprint: 1611.01236, 2016.
[24] Tsipras D, Santurkar S, Engstrom L, et al. Robustness may be at odds with accuracy[DB/OL]. arXiv preprint: 1805.12152, 2018.
[25] Zhang H, Yu Y, Jiao J, et al. Theoretically principled trade-off between robustness and accuracy[C]// Proceedings of the 36th International Conference on Machine Learning. New York: PMLR, 2019: 7472-7482.
[26] Mirman M, Gehr T, Vechev M. Differentiable abstract interpretation for provably robust neural networks[C]// Proceedings of the 35th International Conference on Machine Learning. New York: PMLR, 2018: 3578-3586.
[27] Gowal S, Dvijotham K, Stanforth R, et al. On the effectiveness of interval bound propagation for training verifiably robust models[DB/OL]. arXiv preprint: 1810.12715, 2018.
[28] Zhang H, Chen H, Xiao C, et al. Towards stable and efficient training of verifiably robust neural networks[DB/OL]. arXiv preprint: 1906.06316, 2019.
[29] Fazlyab M, Robey A, Hassani H, et al. Efficient and accurate estimation of Lipschitz constants for deep neural networks[J]. Advances in Neural Information Processing Systems, 2019, 32: 11423-11434.
[30] Pauli P, Koch A, Berberich J, et al. Training robust neural networks using Lipschitz bounds[J]. IEEE Control Systems Letters, 2021, 6: 121-126.
[31] Gouk H, Frank E, Pfahringer B, et al. Regularisation of neural networks by enforcing Lipschitz continuity[J]. Machine Learning, 2021, 110(2): 393-416.
[32] Leino K, Wang Z, Fredrikson M. Globally-robust neural networks[C]// Proceedings of the 38th International Conference on Machine Learning. New York: PMLR, 2021: 6212-6222.
[33] Singh G, Gehr T, Püschel M, et al. An abstract domain for certifying neural networks[J]. Proceedings of the ACM on Programming Languages, 2019, 3(POPL): 1-30.
[34] Balunovic M, Baader M, Singh G, et al. Certifying geometric robustness of neural networks[J]. Advances in Neural Information Processing Systems, 2019, 32: 15287-15297.
[35] Su D, Zhang H, Chen H, et al. Is robustness the cost of accuracy? A comprehensive study on the robustness of 18 deep image classification models[C]// Ferrari V, Hebert M, Sminchisescu C, et al. Proceedings of the 15th European Conference on Computer Vision. Cham: Springer, 2018: 631-648.
[36] Xie C, Yuille A. Intriguing properties of adversarial training at scale[DB/OL]. arXiv preprint: 1906.03787, 2019.
[37] Guo M, Yang Y, Xu R, et al. When NAS meets robustness: In search of robust architectures against adversarial attacks[C]// Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2020: 631-640.
[38] Bender G, Kindermans P J, Zoph B, et al. Understanding and simplifying one-shot architecture search[C]// Proceedings of the 35th International Conference on Machine Learning. New York: PMLR, 2018: 550-559.
[39] Cai H, Gan C, Wang T, et al. Once-for-all: Train one network and specialize it for efficient deployment[DB/OL]. arXiv preprint: 1908.09791, 2019.
[40] Xiao K Y, Tjeng V, Shafiullah N M, et al. Training for faster adversarial robustness verification via inducing ReLU stability[DB/OL]. arXiv preprint: 1809.03008, 2018.
[41] Dvijotham K, Gowal S, Stanforth R, et al. Training verified learners with learned verifiers[DB/OL]. arXiv preprint: 1805.10265, 2018.