Recipes for Building the Dual of Conic Optimization Problem

Building the dual of the primal problem of Conic Optimization (CO) is a very important step in finding the optimal solution. In many cases a given problem does not have the simple structure of a CO problem (i.e., minimizing a linear function over an intersection of an affine space and convex cones), but there are several conic constraints and sometimes also equality constraints. In this paper we deal with the question of how to form the dual problem in such cases. We discuss the answer by considering several conic constraints with or without equality constraints. The recipes for building the dual in such cases are formulated in standard matrix form, so that they can be used easily in numerical experiments. Special attention is given to the dual development of special classes of CO problems, i.e., conic quadratic and semidefinite problems. In this paper, we also briefly present some preliminary theory on CO as an introduction to the main topic.

DOI: http://dx.doi.org/10.22342/jims.16.1.28.9-23


Introduction
Conic optimization (CO) is a very useful optimization technique that concerns the problem of minimizing a linear objective function over the intersection of an affine set and a convex cone. The general form of a conic optimization problem is

(CP)  min { c^T x : Ax − b ∈ K }.

The objective function is c^T x, with objective vector c ∈ R^n. Furthermore, Ax − b represents an affine function from R^n to R^m, K denotes a convex cone in R^m, and the constraint matrix A is of size m × n. The importance of this class of problems is due to two facts: many practical nonlinear problems can be modelled as CO problems, and a wide class of CO problems can be solved efficiently by so-called interior-point methods.
The interest in CO was highly stimulated when it became clear that the interior-point methods that were developed in the last two decades for LO (see, e.g., Hertog [6], Jansen [7], Karmarkar [9], Roos et al. [15], Terlaky [18], Wright [19], Ye [20]), and which revolutionized the field of LO, could be naturally extended to obtain polynomial-time methods for CO. The most elegant theory, developed by Nesterov and Nemirovskii [11], provides an interior-point method with polynomial complexity if the underlying cone has a so-called self-concordant barrier that is computationally tractable. This opened the way to a wide spectrum of new applications which cannot be captured by LO, e.g., in image processing, finance, economics, control theory, combinatorial optimization, etc. For a nice survey of both the theory of CO and many new applications, we refer to the book of Ben-Tal and Nemirovskii [2]. In this paper we do not touch the algorithmic aspects of interior-point methods for CO. We refer the interested reader to the existing literature, where one can find a wide variety of such methods. See, e.g., the above references and also Boyd [4], Jarre [8], de Klerk [5], Peng [12], Renegar [13]. Numerical evidence for the efficiency of these methods has been provided by many authors (e.g., Andersen [1], Mehrotra [10], Sturm [16, 17]).
The easiest and most well-known case occurs when the cone K is the nonnegative orthant of R^m, i.e., when K = R^m_+. Then the above problem gets the form

min { c^T x : Ax − b ≥ 0 }.

This is nothing but one of the standard forms of the well-known Linear Optimization (LO) problem. Thus it becomes clear that LO is a special case of CO. The organization of this paper is presented as follows. In Section 2 we recall some basic concepts such as affine sets, convex sets and convex cones. The main theoretical results for conic optimization are presented in Section 2.3. This section includes the conic duality theorem and recipes for building the dual problem. We discuss as special cases the conic quadratic problem (CQP) in Section 3.3 and the semidefinite problem (SDP) in Section 3.4. The conic duality for each of the discussed cases can be found in Section 4.

Preliminaries
This section contains some basic concepts that are used in conic optimization. We first recall some well-known facts on affine sets, convex sets and convex cones. Sources for this section are Ben-Tal and Nemirovskii [2], Boyd [3] and Rockafellar [14].
2.1. Affine and Convex Sets. If x and y are different points in R^m, the set of points ℓ(x, y) = { λx + (1 − λ)y : λ ∈ R } is the line through x and y. A subset M of R^m is called an affine set if ℓ(x, y) ⊆ M for all x, y ∈ M. Geometrically, an affine set is simply a translation of a linear subspace L of R^m. As a consequence, any affine set can be represented either as M = b + L = { b + ℓ : ℓ ∈ L }, where b is an arbitrary point of M, or as M = { x ∈ R^m : Bx = d }, where the kernel of the matrix B is L and Bb = d.
A set C ⊆ R^m is a convex set if it contains the line segment ℓ[x, y] = { λx + (1 − λ)y : 0 ≤ λ ≤ 1 } joining x and y, for all x, y ∈ C. All affine sets are convex since ℓ[x, y] ⊆ ℓ(x, y) for any two points x, y ∈ R^m. It is well known, and may be easily verified, that the intersection of an arbitrary collection of convex sets is convex.

2.2. Convex Cones.
A subset K of R^m is called a cone if it is closed under multiplication with nonnegative scalars: a ∈ K, λ ≥ 0 ⇒ λa ∈ K. Thus a cone is the union of half-lines emanating from the origin. A cone is convex if and only if a, a′ ∈ K ⇒ a + a′ ∈ K, which is easy to verify. So, a subset K of R^m is a convex cone if and only if it is closed under addition and nonnegative scalar multiplication. As an easy consequence we mention that the intersection of an arbitrary collection of convex cones is a convex cone.
In this paper we are interested in convex cones that are pointed, closed and have nonempty interior. A convex cone K is called pointed if it does not contain a line. This property can be stated equivalently as: a ∈ K, −a ∈ K ⇒ a = 0. (3) A convex cone K is called closed if it is closed under taking limits: a_i ∈ K, a_i → a ⇒ a ∈ K. A cone has a nonempty interior if there exists a vector such that a ball with positive radius centered at that vector is contained in the cone. The set of all such vectors is called the interior of the cone. Denoting the interior of a cone K as int K, we require that int K ≠ ∅.
If a cone K is pointed, closed and solid (i.e., has nonempty interior), we call K a proper cone.
In CO we only deal with cones K that enjoy all of the above properties. So we always assume that K is a proper cone.
The Lorentz cone is defined as L^m = { x ∈ R^m : x_m ≥ ‖(x_1; ... ; x_{m−1})‖ }. This cone is also called the second-order cone, or the ice-cream cone, or the quadratic cone.
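As an illustration, membership in the Lorentz cone can be checked numerically: a point belongs to L^m exactly when its last component dominates the Euclidean norm of the remaining components. The following sketch uses a helper name of our own choosing.

```python
import math

def in_lorentz_cone(x, tol=1e-12):
    """Check x in L^m: the last component must dominate the
    Euclidean norm of the remaining components."""
    head, last = x[:-1], x[-1]
    return last >= math.sqrt(sum(t * t for t in head)) - tol

# (3, 4, 5): 5 >= sqrt(3^2 + 4^2) = 5, so the point lies on the boundary
print(in_lorentz_cone([3.0, 4.0, 5.0]))   # True
# (3, 4, 4.9): 4.9 < 5, so the point is outside the cone
print(in_lorentz_cone([3.0, 4.0, 4.9]))   # False
```

Note that boundary points (where equality holds) belong to the cone, since the cone is closed.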
We assume that the cone K in (CP) is a direct product of the form K = K_1 × K_2 × · · · × K_m, where each component K_i is either a linear, a Lorentz or a semidefinite cone. Any such direct product is itself a proper cone.
In the next section, we discuss the main theoretical result for conic optimization, i.e., the conic duality theorem.
2.3. Conic Duality. Before we derive the duality theory for CO, we need to define the dual cone K* of a convex cone K: K* = { y ∈ R^m : y^T a ≥ 0 for all a ∈ K }. The following theorem (see, e.g., [2]) implies among other things that the dual cone is always a closed convex cone.
An immediate corollary of the theorem is as follows.
Corollary 2.2. If K ⊆ R m is a proper cone then so is K * , and vice versa.
By Theorem 2.1, self-dual cones are closed and convex. The dual of a direct product of convex cones is the direct product of their duals, i.e., (K_1 × · · · × K_m)* = K_1* × · · · × K_m*. One may easily verify that the three cones introduced in Section 2.2 are self-dual. As a consequence, any direct product of linear, Lorentz and semidefinite cones is self-dual. Now we are ready to deal with the problem dual to a conic problem (CP). We start with the observation that whenever x is a feasible solution of (CP) and y ∈ K*, then x satisfies the scalar inequality y^T (Ax − b) ≥ 0. It follows that whenever y satisfies the relation A^T y = c, then for all x feasible in (CP) one has c^T x = (A^T y)^T x = y^T (Ax) ≥ b^T y. This implies that b^T y is a lower bound on the optimal value of (CP). The best bound one can get in this way is the optimal value of the problem

(CD)  max { b^T y : A^T y = c, y ∈ K* }.

This problem is called the dual problem of (CP). Thus we arrive at the following conic weak duality theorem.
Theorem 2.3. Let x be a feasible solution of (CP) and y a feasible solution of (CD). Then c^T x ≥ b^T y.
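As a quick numerical sanity check of weak duality, consider the linear cone K = R^m_+ (which is self-dual); the small instance below is our own illustrative choice.

```python
import numpy as np

# Tiny instance with K = R^m_+ (self-dual), so (CP) reads: min c^T x s.t. Ax - b >= 0.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 1.0, 3.0])
c = np.array([1.0, 2.0])

x = np.array([3.0, 1.0])        # primal feasible: Ax - b = (2, 0, 1) >= 0
y = np.array([0.0, 1.0, 1.0])   # dual feasible: y >= 0 and A^T y = (1, 2) = c

assert np.all(A @ x - b >= 0)
assert np.allclose(A.T @ y, c)

# Weak duality: c^T x >= b^T y, and the gap equals y^T (Ax - b)
gap = c @ x - b @ y
print(gap)                      # → 1.0
assert np.isclose(gap, y @ (A @ x - b))
```

Here c^T x = 5 and b^T y = 4, so b^T y is indeed a lower bound on the primal objective at x.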
After the weak duality theorem, the crucial question is whether we have equality of the optimal values whenever (CP) and (CD) have optimal values. The theorem that we present below clarifies the situation.
Before we state the theorem, for convenience let us denote the optimal objective values of problems (CP) and (CD) by p* and d*, respectively. We will say that the primal (dual) problem is unbounded if p* = −∞ (d* = +∞) and that it is infeasible if there is no feasible solution, in which case we define p* = +∞ (d* = −∞). We emphasize the fact that although our cone K is closed, it may happen that the infimum in (CP) or the supremum in (CD) is not attained, even if it is finite, because there may exist a sequence of feasible points whose objective values tend to p* (d*) but whose limit is not feasible.
We say that the primal (dual) problem is solvable if the optimal objective value p* (d*) is attained by a primal (dual) feasible solution. We need one more definition: if there exists an x such that Ax − b ∈ int K, then we say that (CP) is strictly feasible. Similarly, (CD) is strictly feasible if there exists a feasible y with y ∈ int K*. Now we are ready to state the strong conic duality theorem (see Ben-Tal and Nemirovskii [2]).
Theorem 2.4. Let the primal problem (CP ) and its dual problem (CD) be as given above. Then one has the following.
(1) The duality is symmetric: the dual problem is conic, and the problem dual to the dual problem is (equivalent to) the primal problem. An important consequence of the conic duality theorem is the following. Thus, we are now ready to discuss the main topic of the paper, i.e., the recipes for building the dual problem for CO.

Recipes for Building The Dual Problem
In many cases a given problem does not have the simple structure of (CP), but there are several conic constraints and sometimes also equality constraints, as in the problem

min { c^T x : A_i x − b_i ∈ K_i, i = 1, ..., m; Px = p },  (8)

where the K_i are different cones. In this section we deal with the question of how to form the dual problem in such cases. We discuss the answer by considering two cases as follows.
3.1. Building The Dual of A Conic Problem without Equality Constraints.
In this subsection, we discuss the case where there are no equality constraints in (8). Then (8) gets the form

min { c^T x : A_i x − b_i ∈ K_i, i = 1, ..., m }.

In that case the constraints can be taken together in one conic constraint, namely

Ax − b ∈ K := K_1 × K_2 × · · · × K_m.

With A = (A_1; ...; A_m) and b = (b_1; ...; b_m), we get the form of (CP). By writing the dual variable as y = (y_1; ...; y_m) with y_i ∈ K_i*, we have A^T y = A_1^T y_1 + ... + A_m^T y_m, and hence the dual problem is given by

max { b_1^T y_1 + · · · + b_m^T y_m : A_1^T y_1 + · · · + A_m^T y_m = c, y_i ∈ K_i*, i = 1, ..., m }.

3.2. Building The Dual of A Conic Problem with Equality Constraints.
The second case is when (8) has equality constraints. Because of the above observation we may then assume that there is only one conic constraint, as follows:

min { c^T x : Px = p, Ax − b ∈ K }.  (12)
Now the question is: what is the dual to (12)? An easy way to find the dual is to write the equality constraint Px = p as (Px − p; 0) ∈ L^{q+1}, where q is the dimension of p. So we may rewrite (12) as

min { c^T x : (Px − p; 0; Ax − b) ∈ L^{q+1} × K }.  (13)

Note that the cone associated with (13) is L^{q+1} × K, which has the dual cone L^{q+1} × K*. Thus, the following conic problem is the dual to (13):

max { p^T v + b^T y : P^T v + A^T y = c, (v; τ) ∈ L^{q+1}, y ∈ K* }.  (14)

Now, consider that (v; τ) ∈ L^{q+1} is equivalent to the constraint ‖v‖ ≤ τ. Since τ appears neither in the other constraints nor in the objective function, we can neglect this constraint. Therefore, problem (14) can be rewritten as follows:

max { p^T v + b^T y : P^T v + A^T y = c, y ∈ K* },

with v a free variable. Here v and y are the dual variables corresponding to the constraints of the primal problem (12); the dual vector comprises a vector variable v of the same dimension as the right-hand side p of the system of primal equalities and a vector variable y of the same dimension as the primal vector inequality in (12). We summarize the recipes for building the dual in the following theorem.
Theorem 3.1. The dual problem of (8) is given by

max { p^T v + Σ_{i=1}^m b_i^T y_i : P^T v + Σ_{i=1}^m A_i^T y_i = c, y_i ∈ K_i*, i = 1, ..., m },  (16)

with v a free (unrestricted) vector variable.

3.3. Building The Dual of A Conic Quadratic Problem.
A conic problem (CP) for which the cone K is a direct product of several second-order cones is called a conic quadratic problem (CQP). Such a problem has the form

min { c^T x : A_i x − b_i ∈ L^{m_i}, i = 1, ..., m }.

Omitting the subscript i, the constraints in (CQP) all have the form Ax − b ∈ L^m, for some m ≥ 2. In this section we show that any such constraint can be written in a different way. We write the matrix A and the vector b in the constraint Ax − b ∈ L^m as

A = [D; a_m^T],  b = (d; b_m),

where a_m^T is the last row of A and b_m is the last entry in b. Then we have Ax − b = (Dx − d; a_m^T x − b_m). Hence the conic constraint Ax − b ∈ L^m is equivalent to the constraint

‖Dx − d‖ ≤ a_m^T x − b_m.

We call any constraint of this form a conic quadratic constraint. Thus the conic quadratic problem (CQP) can be represented as follows:

(QP)  min { c^T x : ‖D_i x − d_i‖ ≤ a_i^T x − b_i, i = 1, ..., m }.  (17)
Of course, the norm appearing above is the standard Euclidean norm, i.e., ‖u‖ = (u^T u)^{1/2}.
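The equivalence between the cone-membership form and the norm-inequality form of the constraint can be checked numerically. The partition of A into a top block (here named D, our own label) and the last row a_m^T follows the text; the random data is ours.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))     # last row of A plays the role of a_m^T
b = rng.standard_normal(4)          # last entry of b plays the role of b_m
x = rng.standard_normal(3)

D, a_m = A[:-1, :], A[-1, :]
d, b_m = b[:-1], b[-1]

v = A @ x - b
in_cone = v[-1] >= np.linalg.norm(v[:-1])           # Ax - b in L^m
as_cq = a_m @ x - b_m >= np.linalg.norm(D @ x - d)  # conic quadratic form
assert in_cone == as_cq                             # the two tests agree
print(bool(in_cone))
```

The two conditions agree for every x, since (Dx − d; a_m^T x − b_m) is exactly the vector Ax − b.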
Denoting y = (y_1; y_2; ...; y_m), with y_i ∈ L^{m_i}, Theorem 3.1 implies that the dual problem to (CQP) is

max { Σ_{i=1}^m b_i^T y_i : Σ_{i=1}^m A_i^T y_i = c, y_i ∈ L^{m_i}, i = 1, ..., m }.  (18)

Here we used that the product of several second-order cones is self-dual. Now we derive that the dual problem of (QP) can be expressed in terms of the data of (QP), i.e., D_i, d_i, a_i, b_i, i = 1, ..., m. This is achieved by writing y_i = (μ_i; τ_i), with scalar last component τ_i. Then we can rewrite (18) as follows:

(QD)  max { Σ_{i=1}^m (d_i^T μ_i + b_i τ_i) : Σ_{i=1}^m (D_i^T μ_i + τ_i a_i) = c, ‖μ_i‖ ≤ τ_i, i = 1, ..., m }.

3.4. Building The Dual of A Semidefinite Problem. A semidefinite problem (SDP) is a conic problem for which the cone K is a semidefinite cone. Such a problem has the form

(SDP)  min { c^T x : Ax − b ∈ S^m_+ }.

We start by considering the semidefinite constraint Ax − b ∈ S^m_+ for m ≥ 2. We show that such a constraint is equivalent to a so-called linear matrix inequality (LMI).
To show this, recall that the cone S^m_+ consists of all positive semidefinite m × m matrices. It is assumed so far that Ax and b are vectors. This makes it necessary to explain the meaning of Ax − b ∈ S^m_+. This can be clarified as follows. By associating to every m × m symmetric matrix Z the concatenation of its columns, in their natural order, we get a vector z ∈ R^{m²}. The mapping Z → z is linear and one-to-one. This mapping is denoted as vec(·) and its inverse mapping as mat(·). So we may write z = vec(Z), Z = mat(z). (19) Consider that when U is another m × m symmetric matrix and u = vec(U), then z^T u = Tr(ZU). (20) This shows that Tr(ZU) is the natural inner product of two symmetric matrices Z and U. It is (therefore) also represented as ⟨Z, U⟩. The corresponding norm is the well-known Frobenius norm, which satisfies ‖Z‖ = ⟨Z, Z⟩^{1/2} = (Tr(ZZ))^{1/2}. Obviously, the trace function is commutative, i.e., Tr(ZU) = Tr(UZ).
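Identity (20) is easy to verify numerically; vec stacks columns in their natural order as in the text, and the small symmetric matrices are our own example.

```python
import numpy as np

def vec(Z):
    return Z.flatten(order="F")     # concatenate columns in natural order

def mat(z):
    m = int(round(len(z) ** 0.5))
    return z.reshape((m, m), order="F")

Z = np.array([[2.0, 1.0], [1.0, 3.0]])
U = np.array([[1.0, 0.5], [0.5, 2.0]])

# vec and mat are mutually inverse linear bijections
assert np.array_equal(mat(vec(Z)), Z)
# Tr(ZU) equals the ordinary inner product of vec(Z) and vec(U)
assert np.isclose(np.trace(Z @ U), vec(Z) @ vec(U))
# the trace function is commutative: Tr(ZU) = Tr(UZ)
assert np.isclose(np.trace(Z @ U), np.trace(U @ Z))
print(np.trace(Z @ U))              # → 9.0
```

Here Tr(ZU) = 2·1 + 1·0.5 + 1·0.5 + 3·2 = 9, which agrees with vec(Z)^T vec(U).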
Thus, using the above notational conventions, any constraint Ax − b ∈ S^m_+ should be understood as the requirement that mat(Ax − b) be positive semidefinite. Writing a_i for the i-th column of A, we have Ax − b = Σ_{i=1}^n x_i a_i − b, and hence the constraint becomes

x_1 A_1 + x_2 A_2 + · · · + x_n A_n − B ⪰ 0,  (21)

where A_i = mat(a_i) and B = mat(b). We call any constraint of the form (21) a linear matrix inequality (LMI). Consequently, a semidefinite problem can be represented as

(SP)  min { c^T x : x_1 A_1 + · · · + x_n A_n − B ⪰ 0 }.
By Theorem 3.1, the dual problem of (SDP) is given by

max { b^T y : A^T y = c, mat(y) ∈ S^m_+ },

where we used that the semidefinite cone is self-dual. To write the dual problem of (SP) in terms of matrices, we use (20). To this end we observe that A^T y = c ⇔ a_i^T y = c_i, i = 1, ..., n. As before, let A_i = mat(a_i) and Y = mat(y). Then by (20) we may write a_i^T y = Tr(A_i Y) = ⟨A_i, Y⟩ and b^T y = Tr(BY) = ⟨B, Y⟩, so that the dual problem becomes

max { ⟨B, Y⟩ : ⟨A_i, Y⟩ = c_i, i = 1, ..., n, Y ⪰ 0 }.

Thus, we can summarize the results by representing explicit forms of the primal and the dual for each problem, respectively, in Table 1.
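As a sanity check of this primal–dual pair, consider a one-variable toy SDP (the data is our own): both feasibility and the identity "gap = ⟨primal slack, Y⟩" can be verified directly.

```python
import numpy as np

# One-variable SDP: min c1*x s.t. x*A1 - B is positive semidefinite
A1 = np.eye(2)
B = np.diag([1.0, 0.0])
c1 = 1.0

x = 2.0                          # primal feasible: 2*I - B = diag(1, 2) is PSD
Y = np.diag([1.0, 0.0])          # dual feasible: Y is PSD and Tr(A1 Y) = 1 = c1

S = x * A1 - B                   # primal slack matrix
assert np.all(np.linalg.eigvalsh(S) >= 0)   # x feasible for (SP)
assert np.all(np.linalg.eigvalsh(Y) >= 0)
assert np.isclose(np.trace(A1 @ Y), c1)     # Y feasible for the dual

gap = c1 * x - np.trace(B @ Y)   # duality gap c^T x - <B, Y>
assert np.isclose(gap, np.trace(S @ Y))     # gap = <S, Y> >= 0
print(gap)                       # → 1.0
```

Since both S and Y are positive semidefinite, ⟨S, Y⟩ ≥ 0, which reproduces weak duality for this instance.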

Conic Duality in The Discussed Cases
The idea of this section is to check the necessary and sufficient conditions of the Conic Duality Theorem for each case of conic problem discussed in the previous section. The results are presented as follows.
4.1. Conic duality in case of a conic problem with or without equality constraints. Consider the problem (13). The weak conic duality theorem holds: whenever x is a feasible solution of (13) and (v; τ; y) ∈ L^{q+1} × K*, with y = (y_1; ...; y_m) and y_i ∈ K_i*, i = 1, ..., m, then (v; τ; y)^T (Px − p; 0; Ax − b) ≥ 0. It follows that whenever (v; τ; y) satisfies the relation P^T v + A^T y = c, then for all x feasible in (13) one has c^T x ≥ p^T v + b^T y. This implies that p^T v + b^T y is a lower bound on the optimal value of (13). The best bound one can get in this way is the optimal value of the problem (14). Furthermore, as stated in [2], there is a standing assumption that when speaking about (CP), the matrix A is of full column rank (i.e., its columns are linearly independent). This means that for the problems (13) and (14) the following conditions hold.
(1) The rows of the matrix P in (13) are linearly independent.
(2) There is no x such that P x = 0 and A i x = 0 for i = 1, . . . , m.
(3) Weak duality holds (as we discussed), i.e., the optimal value of (14) is less than or equal to the optimal value of (13). (4) The strong duality condition holds: if one of the problems (13) and (14) is strictly feasible and bounded, then the other problem is solvable, and the optimal values of the problems are equal to each other. If both of the problems are strictly feasible, then both are solvable with equal optimal values. Therefore, we can state the optimality conditions as follows. Let x be a feasible solution to (13) and (v, {y_i}_{i=1}^m) a feasible solution to (14). The duality gap at the pair (x, (v, {y_i}_{i=1}^m)) is nonnegative and equal to

c^T x − p^T v − Σ_{i=1}^m b_i^T y_i = Σ_{i=1}^m y_i^T (A_i x − b_i).

The duality gap is zero if and only if the complementary slackness condition holds:

y_i^T (A_i x − b_i) = 0, i = 1, ..., m.

This means that the duality gap is zero if and only if x is an optimal solution to (13) and (v, {y_i}_{i=1}^m) is an optimal solution to (14). Since (13) is a special form of (8) and (14) can be rewritten as (16), Theorem 3.1 is consistent with conic duality.
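The algebraic identity behind the duality gap, namely that c^T x − p^T v − b^T y collapses to y^T (Ax − b) whenever Px = p and P^T v + A^T y = c, can be verified numerically. The random data below is our own; only the two equality relations are enforced, so the check is purely algebraic.

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.standard_normal((2, 4))      # equality constraints P x = p
A = rng.standard_normal((3, 4))      # conic constraint A x - b in K
x = rng.standard_normal(4)
v = rng.standard_normal(2)           # free dual variable for P x = p
y = rng.standard_normal(3)           # dual variable for the conic constraint
b = rng.standard_normal(3)

p = P @ x                            # make x satisfy the primal equalities
c = P.T @ v + A.T @ y                # make (v, y) satisfy the dual equality

gap = c @ x - (p @ v + b @ y)
assert np.isclose(gap, y @ (A @ x - b))   # gap reduces to y^T (Ax - b)
print(np.isclose(gap, y @ (A @ x - b)))
```

When additionally Ax − b ∈ K and y ∈ K*, the right-hand side is nonnegative, which is exactly weak duality.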

4.2.
Conic Duality in Case of Conic Quadratic Optimization. From our recipes for building the dual problem, we now treat the problems (QP) and (QD) as the standard form of a conic quadratic problem and its dual. We now interpret the standing assumption for conic problems, as stated in [2], for these two problems as follows.
(1) There is no nonzero x that is orthogonal to all rows of all matrices D_i and to all vectors a_i, i = 1, ..., m.
(2) Strict feasibility of (QP) means that there exists x such that ‖D_i x − d_i‖ < a_i^T x − b_i for all i = 1, ..., m. (3) Strict feasibility of (QD) means that there exists a feasible solution y_i = (μ_i; τ_i) to the problem such that ‖μ_i‖ < τ_i for all i = 1, ..., m. (4) The strong duality condition holds: if one of the problems (QP) and (QD) is strictly feasible and bounded, then the other problem is solvable, and the optimal values of the problems are equal to each other. If both of the problems are strictly feasible, then both are solvable with equal optimal values.
We can state the optimality conditions as follows. Let x be a feasible solution to (QP) and y = (y_1; y_2; ...; y_m), with y_i = (μ_i; τ_i) ∈ L^{m_i}, i = 1, ..., m, a feasible solution to (QD). The duality gap at the pair (x, y) is nonnegative and equal to

c^T x − Σ_{i=1}^m (d_i^T μ_i + b_i τ_i) = Σ_{i=1}^m [ μ_i^T (D_i x − d_i) + τ_i (a_i^T x − b_i) ].

The duality gap is zero if and only if the complementary slackness condition holds:

μ_i^T (D_i x − d_i) + τ_i (a_i^T x − b_i) = 0, i = 1, ..., m.

This means that the duality gap is zero if and only if x is an optimal solution to (QP) and y_i = (μ_i; τ_i), i = 1, ..., m, is an optimal solution to (QD).

4.3.
Conic Duality in Case of Semidefinite Optimization. In this subsection, we discuss what we can get from the conic duality theorem in case of semidefinite optimization. Note that the important assumption on a conic problem in the form of (CP) is that the matrix A is of full column rank (i.e., its columns are linearly independent). This means that in case of a semidefinite problem: (1) No nontrivial linear combination of the matrices A_1, ..., A_n is 0.
(2) Strict feasibility of (SP) means that there exists x such that x_1 A_1 + · · · + x_n A_n − B ≻ 0. (3) Strict feasibility of the dual problem means that there exists a positive definite Y satisfying ⟨A_i, Y⟩ = c_i, i = 1, ..., n. Thus, according to the conic duality theorem, if both the primal and the dual are strictly feasible, both are solvable, the optimal values are equal to each other, and the complementary slackness condition is necessary and sufficient for a pair of a primal feasible solution x and a dual feasible solution Y to be optimal for the corresponding problems.

Concluding Remarks
It is now well known that CO is a powerful tool for the mathematical modelling of inherently nonlinear problems. Indeed, the subject owes its existence to the development of efficient solution methods for CO problems in the last decade. Building the dual of the primal problem of CO is a very important step in finding the optimal solution. As a suggestion for future work on this topic, it would be interesting to explore suitable examples.