In the previous post in our series on the Bregman algorithm, we discussed how it can be used to solve convex optimization problems. In this post we point to some variations and extensions of the Bregman algorithm.

First of all, there is a website dedicated to the Split Bregman algorithm, which we analysed here. It provides further documentation and reference implementations in Matlab.

There is also a nice paper [1] that discusses the relationship of the Split Bregman algorithm to other frameworks in convex optimization, such as Douglas–Rachford splitting. Finding such relationships between optimization algorithms is an interesting research topic, but it is beyond the scope of this blog post.

A rather popular variation of the Bregman algorithm is the linearized formulation [2]. Here, the quadratic penalty that enforces the constraint is replaced by its first-order Taylor approximation around the current iterate. This can significantly simplify the iterative computations: for $\ell_1$ minimization, each step reduces to a matrix–vector multiplication followed by soft-thresholding. However, the linearization also changes the original optimization task, so the algorithm does not, in general, converge to the solution of the original problem. One can show, however, that it converges to the solution of a slightly regularized problem, which is close to the original one [3].
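To make this concrete, here is a minimal Python/NumPy sketch of the linearized Bregman iteration in the compressed sensing setting of [2], i.e. minimizing $\|u\|_1$ subject to $Au = f$. The function names, the step size choice, and the small demo problem are our own illustrative choices, not taken from the papers; the iteration itself follows the shrinkage form discussed in [2, 3].

```python
import numpy as np

def shrink(x, mu):
    """Soft-thresholding, the proximal operator of mu * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

def linearized_bregman(A, f, mu, delta, iters=5000):
    """Linearized Bregman iteration for min ||u||_1 s.t. A u = f.

    For a suitable step size (roughly 0 < delta < 1/||A A^T||) the
    iterates converge to the minimizer of the slightly regularized
    problem  mu*||u||_1 + 1/(2*delta)*||u||_2^2  subject to  A u = f,
    which is close to the l1 solution when mu*delta is large [3].
    """
    u = np.zeros(A.shape[1])
    v = np.zeros(A.shape[1])
    for _ in range(iters):
        v = v + A.T @ (f - A @ u)   # gradient step on the linearized constraint term
        u = delta * shrink(v, mu)   # cheap closed-form update via soft-thresholding
    return u

# Small compressed-sensing style demo (sizes and values purely illustrative).
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 8))        # underdetermined measurement matrix
u_true = np.zeros(8)
u_true[3] = 2.0                        # 1-sparse ground truth
f = A @ u_true
delta = 0.5 / np.linalg.norm(A, 2) ** 2  # step size below 1/||A A^T||
u_rec = linearized_bregman(A, f, mu=50.0, delta=delta)
```

Note that each iteration only needs two matrix–vector products and one componentwise shrinkage, which is what makes the linearized variant attractive for large-scale problems.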

## References

[1] Setzer S. “Split Bregman Algorithm, Douglas-Rachford Splitting and Frame Shrinkage.” In: Tai XC., Mørken K., Lysaker M., Lie KA. (eds) Scale Space and Variational Methods in Computer Vision. SSVM 2009. Lecture Notes in Computer Science, vol 5567. Springer, Berlin, Heidelberg. Preprint available here.

[2] Cai J.-F., Osher S.J., Shen Z. “Linearized Bregman Iterations for Compressed Sensing.” Mathematics of Computation, 2009, 78(267), 1515–1536.

[3] Cai J.-F., Osher S.J., Shen Z. “Convergence of the Linearized Bregman Iteration for $\ell_1$-Norm Minimization.” Mathematics of Computation, 2009, 78(268), 2127–2136.