ECSE 4500/6500 Optimization and Learning in Distributed Systems (previously "Distributed Systems and Sensor Networks"), Tianyi Chen, Spring 2021
Course Information
Meeting Times: Mon, Thu 10:10 AM - 11:30 AM

Course Description
This course is about neither the hardware implementation nor the design of distributed systems. It is a course on the algorithms and simulation of parallel and distributed optimization. It covers parallel and distributed optimization algorithms, and their analyses, suitable for the large-scale and distributed problems arising in machine learning and signal processing.

Prerequisites
This course is intended for graduate students and qualified undergraduate students with a strong mathematical and programming background. Undergraduate-level coursework in linear algebra, calculus, probability, and statistics is suggested. A background in programming (e.g., Python or Matlab) is necessary for the problem sets, and a background in optimization and machine learning is preferred. At RPI, the required courses are MATH 2010 and ECSE 2500; the suggested courses are MATP 4820 and ECSE 4840, or permission of the instructor.

Student Learning Outcomes
After taking the course, undergraduate students are expected to know:
2. how to implement numerically stable algorithms to solve real-world engineering problems in distributed systems;
3. how to qualitatively evaluate the efficiency of a distributed algorithm for various applications in distributed systems.

In addition to the above, graduate students are also expected to know:
2. how to design and modify numerically stable algorithms to solve real-world engineering problems for various applications in distributed systems.

Grading Criteria
Homework assignments: total 6, 60%
ECSE 6500 - Individual project. The project can be either theoretical or experimental, with approval from the instructor. You are encouraged to combine your current research with your term project.

Optional References
1. Dimitri P. Bertsekas and John N. Tsitsiklis, "Parallel and Distributed Computation: Numerical Methods", Athena Scientific, 2015;
2. Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato and Jonathan Eckstein, "Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers," Foundations and Trends in Machine Learning, 2011;
3. Guanghui Lan, "Lectures on Optimization Methods for Machine Learning," preprint, 2019;
4. Ernest Ryu and Wotao Yin, "A First Course in Large-Scale Optimization," preprint, 2020.

Course Content
1. Optimization basics and complexity measures
b) Distributed/local stochastic gradient descent
c) Distributed variance-reduced stochastic gradient
b) Decentralized ADMM
c) Decentralized stochastic gradient descent
9. Decentralized algorithms with time-varying topology
10. Applications in machine learning, signal processing and control
b) Distributed reinforcement learning
c) Distributed power system state estimation
d) Distributed parameter estimation in sensor networks
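Since the course emphasizes algorithms and simulation, many of the topics above can be prototyped on a single machine. As a flavor of what the problem sets involve, below is a minimal sketch of decentralized gradient descent (topics 8-9) on a toy consensus problem; the ring topology, quadratic objectives, and step size are illustrative choices, not taken from the course materials.

```python
# A minimal single-machine simulation of decentralized gradient descent.
# Each node i holds a private quadratic f_i(x) = 0.5 * (x - targets[i])^2;
# the global minimizer is the mean of the targets. Nodes average with ring
# neighbors via a doubly stochastic mixing matrix, then take a local
# gradient step. (Illustrative sketch, not the course's reference code.)
import numpy as np

def decentralized_gd(targets, num_iters=500, step=0.1):
    n = len(targets)
    # Doubly stochastic mixing matrix W for a ring topology:
    # each node keeps weight 0.5 on itself, 0.25 on each neighbor.
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 0.5
        W[i, (i - 1) % n] = 0.25
        W[i, (i + 1) % n] = 0.25
    x = np.zeros(n)  # one scalar iterate per node
    for _ in range(num_iters):
        grad = x - targets       # local gradients (computable in parallel)
        x = W @ x - step * grad  # mix with neighbors, then descend
    return x

targets = np.array([1.0, 2.0, 3.0, 4.0])
x = decentralized_gd(targets)
# With a constant step size, the nodes reach approximate consensus in a
# neighborhood of the global minimizer (the mean of the targets, 2.5).
```

Replacing the exact local gradient with a noisy sample gives decentralized stochastic gradient descent (topic 8c), and letting W change across iterations models a time-varying topology (topic 9).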