Developers insert monitoring probes (i.e., logging statements) into their source code to monitor the runtime behavior of software systems. Logging statements print runtime log messages, which play a critical role in various operations and maintenance efforts (e.g., anomaly detection and failure diagnosis). However, developers typically insert logging statements in an ad hoc manner, often resulting in fragile logging code: insufficient logging in some code snippets and excessive logging in others. Insufficient logging can significantly increase the difficulty of diagnosing field failures, while excessive logging can cause performance overhead and hide truly important information. To understand and support software logging practices, we surveyed software developers and analyzed software repositories (source code, change history, and issue reports) to study the benefits and costs of logging, where developers place their logging code, how they choose its verbosity level and content, how they maintain it, and how consistent it is with the surrounding code. Based on these findings, we proposed automated approaches to support developers’ logging practices (e.g., auto-generation of logging code). In this talk, I will discuss lessons learned from traditional software monitoring, how they apply to machine learning applications, and the particular considerations for monitoring machine learning applications.
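To make the trade-off between verbosity levels concrete, the following is a minimal sketch (hypothetical code, not from the study) of how a developer might choose levels for logging statements using Python's standard logging module; the function name and messages are invented for illustration:

```python
import logging

# The configured threshold controls which statements actually emit output;
# e.g., at level INFO, the DEBUG statement below is suppressed.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("payment")

def process_payment(order_id, amount):
    # DEBUG: detailed diagnostic output, typically disabled in production
    # to avoid performance overhead from excessive logging.
    logger.debug("processing order %s for amount %.2f", order_id, amount)
    if amount <= 0:
        # ERROR: a failure condition that field failure diagnosis depends on;
        # omitting it would be an instance of insufficient logging.
        logger.error("invalid amount %.2f for order %s", amount, order_id)
        return False
    # INFO: a high-level runtime event worth recording in normal operation.
    logger.info("order %s charged %.2f", order_id, amount)
    return True
```

Choosing the level too low floods the logs and hides important messages; choosing it too high leaves failures undiagnosable, which is precisely the fragility described above.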