{"id":76,"date":"2023-12-27T08:05:40","date_gmt":"2023-12-27T08:05:40","guid":{"rendered":"https:\/\/tensor.agenthub.uk\/?p=76"},"modified":"2024-04-24T08:46:15","modified_gmt":"2024-04-24T08:46:15","slug":"mathematical-notation","status":"publish","type":"post","link":"https:\/\/tensorzen.blog\/?p=76","title":{"rendered":"Mathematical notation"},"content":{"rendered":"\n<p>Vectors are denoted by lower case bold Roman letters such as $\\textbf{x}$, and all vectos are assumed to be column vectors.<\/p>\n\n\n\n<p>A superscript $T$ denotes the transpose of a matrix or vector, so that $\\textbf{x}^T$ will be a row vector.<\/p>\n\n\n\n<p>Uppercase bold roman letters, such as $\\textbf{M}$, denote matrices.<\/p>\n\n\n\n<p>The notation $(w_1,\u2026,w_M)$ denotes a row vector with $M$ elements, while the corresponding column vector is written as $\\textbf{w} = (w_1, \u2026, w_M)^T$.<\/p>\n\n\n\n<p>The notation $[a,b]$ is used to denote the closed interval from a to b, that is the interval including the values a and b themselves, while $(a, b)$ denotes the corresponding open interval, that is the interval excluding a and b.<\/p>\n\n\n\n<p>The $M \\times M$ identity matrix (aslo know as the unit matrix) is denoted $\\text{I}_M$, which will be abbreviated to $\\textbf{I}$ where there is no ambiguity about it dimensionality.<\/p>\n\n\n\n<p>The notation of $g(x) = O(f(x))$ denotes that $|f(x)\/g(x)|$ is bounded as $x \\rightarrow \\infty$. For instance if $g(x) = 3x^2+2$, then $g(x)=O(x^2)$.<\/p>\n\n\n\n<p>The expectation of a function $f(x,y)$ with respect to a random variable $x$ is denoted by $E_{x}[f(x,y)]$. In situations where there is no ambiguity as to which varaible is being averaged over, this will be simplified by omitting the suffix, for instance $E[x]$. If the distribution of $x$ is conditioned on another variable $z$, then the corresponding conditional expectation will be written $E_{x}[f(x|z)]$.  Similarly, the variance is denoted $\\text{var}[f(x)]$, and for vector variables the convariance is written $\\text{cov}[\\textbf{x}, \\textbf{y}]$. We shall also use $\\text{cov}[\\textbf{x}]$ as a shorthand notation for $\\text{cov}[\\textbf{x}, \\textbf{x}]$. <\/p>\n\n\n\n<p>If we have $N$ values $x_1,&#8230;,x_n$ of D-dimensional vectors $\\textbf{x} = (x_1, &#8230;,x_D)^T$, we can conbine the observations into a data matrix $\\textbf{X}$ in which the $n^{th}$ row of $\\textbf{X}$ corresponds to the row vector $x_n^T$. Thus the $n, i$ element of $\\textbf{X}$ corresponds to the $i^{th}$ element of the $n^{th}$ observation $\\textbf{x}_n$.  For the case of one-dimensional varaibles we shall denote such a matrix by $\\mathbf{x}$, which is a column vector whose $n^{th}$ element is $x_n$. Note that $\\mathbf{x}$(which has dimensionality $N$) usea a different typeface to distinguish it from $\\textbf{x}$ (which has dimensionality $D$).<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Vectors are denoted by lower case bold Roman letters such as $\\textbf{x}$, and all vectos are assumed to be column vectors. 
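The expectation and covariance notation above also has a direct sample analogue. The sketch below is again my own illustration, not part of the original text: it uses NumPy's `np.cov` and sample means as estimators for $E[x]$, $\text{var}[f(x)]$, $\text{cov}[\textbf{x}, \textbf{y}]$, and $\text{cov}[\textbf{x}]$, with all distributions and sizes chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000

# Draws of a scalar random variable x; the sample mean estimates E[x]
x = rng.normal(loc=1.0, scale=2.0, size=N)
E_x = x.mean()

# var[f(x)] for f(x) = x**2, estimated by the sample variance of f(x)
var_f = (x**2).var()

# cov[x, y]: np.cov with two 1-D inputs returns the 2 x 2
# covariance matrix of the pair (x, y)
y = 0.5 * x + rng.normal(size=N)
cov_xy = np.cov(x, y)

# cov[x] is shorthand for cov[x, x]; for a data matrix X whose rows
# are observations of a D-dimensional x, rowvar=False gives the
# D x D covariance matrix
X = rng.normal(size=(N, 3))
cov_X = np.cov(X, rowvar=False)

print(E_x)            # close to 1.0
print(var_f)          # sample variance of x**2
print(cov_xy[0, 1])   # estimated covariance between x and y
print(cov_X.shape)    # (3, 3)
```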