{"id":475,"date":"2020-05-30T00:47:00","date_gmt":"2020-05-30T00:47:00","guid":{"rendered":"https:\/\/tensor.agenthub.uk\/?p=475"},"modified":"2024-05-16T06:48:39","modified_gmt":"2024-05-16T06:48:39","slug":"policy-gradient","status":"publish","type":"post","link":"https:\/\/tensorzen.blog\/?p=475","title":{"rendered":"Policy Gradient"},"content":{"rendered":"\n<p>Q-Learning first learns a value function and then derives the optimal policy from that value function. Policy Gradient does exactly what its name says: it models the policy directly, which is very direct~ If the policy is a neural network, its input is the current environment state and its output is the probability of taking each action. Because the output is a distribution over actions, policy gradient methods bring in exploration naturally, so we don't have to design the explore-and-exploit scheme ourselves the way we do in Q-Learning.<\/p>\n\n\n\n<p>If $\\rho$ denotes the performance of the policy and $\\theta$ collects all of the policy's parameters, then updating the policy can be written as<\/p>\n\n\n\n<p>$$\\Delta \\theta = \\alpha \\frac{\\partial \\rho}{\\partial \\theta}$$<\/p>\n\n\n\n<p>where $\\alpha$ is the learning rate; iterating this for a while yields a fairly stable policy. So how is this gradient computed? Let's state the conclusion first, since not everyone will be interested in the derivation that follows~<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Conclusion<\/h2>\n\n\n\n<p>$$\\frac{\\partial \\rho}{\\partial \\theta} = \\sum_{s} d^{\\pi}(s) \\sum_{a} \\frac{\\partial \\pi(s,a)}{\\partial \\theta} Q^{\\pi}(s,a)$$<\/p>\n\n\n\n<p>Here $d^{\\pi}(s) = \\lim_{t \\rightarrow \\infty} \\text{Pr}\\{s_t = s | s_0, \\pi\\}$ is the stationary distribution, which can roughly be understood as the probability of each state occurring. It can be shown that in a Markov Decision Process (MDP), as long as enough steps are executed, the probability of each state occurring stabilizes, so for this post simply reading it as the overall probability of each state is fine~<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Derivation<\/h2>\n\n\n\n<p>Next let's see how the conclusion above is derived. Before starting, define a few variables:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>$t \\in \\{0, 1, 2, \\ldots\\}$<\/li>\n\n\n\n<li>$s_t \\in S$: the state at time $t$, where $S$ is the state space<\/li>\n\n\n\n<li>$a_t \\in A$: the action at time $t$, where $A$ is the action space<\/li>\n\n\n\n<li>$r_t \\in \\mathbb{R}$: the reward for taking action $a_t$ in state $s_t$<\/li>\n<\/ul>\n\n\n\n<p>And a few assumptions:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>The environment has state transition probabilities $$P_{ss'}^{a} = \\text{Pr}\\{s_{t+1} = {s}' | s_t = s, a_t = a\\}$$ meaning the probability of moving to state ${s}'$ after taking action $a_t=a$ in state $s_t=s$<\/li>\n\n\n\n<li>When the environment moves from one state to another it gives you a reward $$R_{s}^{a} = E \\{ r_{t+1} | s_t=s, a_t=a \\}, \\forall s, {s}' \\in S, a 
\\in A$$ That's right, it is an expectation, because the state transitions above are defined as probabilities.<\/li>\n\n\n\n<li>At every step the agent's action is chosen by the policy $$\\pi(s, a, \\theta) = \\text{Pr} \\{ a_t=a | s_t=s, \\theta \\}, \\forall s \\in S, a \\in A$$ which is usually abbreviated to $\\pi(s,a)$, with $\\theta$ omitted<\/li>\n<\/ul>\n\n\n\n<p class=\"has-medium-font-size\"><strong>Evaluating policy performance<\/strong><\/p>\n\n\n\n<p>The original paper provides two formulations for measuring the performance of a policy.<\/p>\n\n\n\n<p>1. The average-reward formulation<\/p>\n\n\n\n<p>$$\\rho(\\pi) = \\lim _{n \\rightarrow \\infty} \\frac{1}{n} E \\{ r_1  + r_2 + r_3 + \\ldots + r_n | \\pi \\} = \\sum_{s}d^{\\pi}(s) \\sum_{a} \\pi(s, a)R_{s}^{a}$$<\/p>\n\n\n\n<p>The formula is fairly easy to understand. In state $s$ each action has its own reward $R_{s}^{a}$ and is taken with probability $\\pi(s,a)$, so we can compute the expected one-step reward of that state, $\\sum_{a}\\pi(s, a)R_{s}^{a}$; we also know the probability $d^{\\pi}(s)$ of each state occurring, so weighting by it gives an overall average reward. A strong policy can predict, in state $s$, the action that yields the highest reward, and samples that action with the highest probability. Based on this policy we can define the value of an action:<\/p>\n\n\n\n<p>$$Q^{\\pi}(s,a) = \\sum_{t=1}^{\\infty}E\\{ r_t - \\rho (\\pi) | s_0=s, a_0 =a, \\pi \\}, \\forall s \\in S, a \\in A$$<\/p>\n\n\n\n<p>The value of taking action $a$ in state $s$ is the reward of all subsequent steps triggered by that action, minus the policy's performance, i.e. minus the average per-step reward. It evaluates whether taking action $a$ in state $s$ is stronger than this policy's average ability across states.<\/p>\n\n\n\n<p>2. The start-state formulation<\/p>\n\n\n\n<p>$$\\rho(\\pi) = E \\left \\{ \\sum_{t=1}^{\\infty} \\gamma^{t-1}r_t|s_0,\\pi \\right \\}$$<\/p>\n\n\n\n<p>This uses the discounted reward from the initial state $s_0$ as the policy's performance. In this formulation the value of an action is:<\/p>\n\n\n\n<p>$$Q^{\\pi}(s, a) = E \\left \\{ \\sum_{k=1}^{\\infty} \\gamma^{k-1} r_{t+k} | s_t=s, a_t=a, \\pi \\right \\}$$<\/p>\n\n\n\n<p>The value of taking action $a$ in state $s$ is the discounted sum of the rewards of all the subsequent steps it triggers.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Proof of PG<\/h2>\n\n\n\n<p>The proof of Policy Gradient is in the Appendix of the paper. Here I only prove the case where performance is the average reward, and I've added an explanation at every step~<\/p>\n\n\n\n<p>$$\\frac{\\partial V^{\\pi}(s)}{\\partial \\theta} = \\frac{\\partial}{\\partial \\theta}\\sum_{a}\\pi(s,a)Q^{\\pi}(s,a), \\forall s \\in S$$<\/p>\n\n\n\n<p>Define $V^{\\pi}(s)$ to be the value of state $s$; clearly it is the expectation, over the actions taken in that state, of the resulting action values. Expanding with the product rule for derivatives:<\/p>\n\n\n\n<p>$$=\\sum_{a} \\left [ \\frac{\\partial \\pi(s,a)}{\\partial 
\\theta}Q^{\\pi}(s,a) + \\pi(s,a)\\frac{\\partial}{\\partial \\theta}Q^{\\pi}(s,a) \\right ]$$<\/p>\n\n\n\n<p>By the definition of $Q^{\\pi}(s,a)$ above, it can be rewritten as $R_{s}^{a} -\\rho(\\pi) + \\sum_{s'} P_{ss'}^{a}V^{\\pi}(s')$. Don't try to derive it~ feel it~ Taking an action in state $s$ yields the reward $R_{s}^{a}$ (with $\\rho(\\pi)$ subtracted for that step, matching the definition of $Q^{\\pi}$), and then the environment moves to another state (${s}'$); every $s'$ has its own value, and we know the transition probabilities $P_{ss'}^{a}$, so the last term is the expected value of the next step<\/p>\n\n\n\n<p>$$=\\sum_{a}  \\left [  \\frac{\\partial \\pi(s,a)}{\\partial \\theta}Q^{\\pi}(s,a) + \\pi(s,a)\\frac{\\partial}{\\partial \\theta} \\left [  R_{s}^{a} -\\rho(\\pi) + \\sum_{s'} P_{ss'}^{a}V^{\\pi}(s') \\right ]\\right ]$$<\/p>\n\n\n\n<p>$R_{s}^{a}$ and $P_{ss'}^{a}$ contain no $\\theta$, so carrying the derivative one step further:<\/p>\n\n\n\n<p>$$=\\sum_{a}  \\left [  \\frac{\\partial \\pi(s,a)}{\\partial \\theta}Q^{\\pi}(s,a) + \\pi(s,a) \\left [- \\frac{\\partial \\rho}{\\partial \\theta} + \\sum_{s'}P_{ss'}^{a}\\frac{\\partial V^{\\pi}(s')}{\\partial \\theta} \\right ] \\right ]$$<\/p>\n\n\n\n<p>Rearranging:<\/p>\n\n\n\n<p>$$=\\sum_{a}  \\left [  \\frac{\\partial \\pi(s,a)}{\\partial \\theta}Q^{\\pi}(s,a) - \\pi(s,a) \\frac{\\partial \\rho}{\\partial \\theta} +  \\pi(s,a) \\sum_{s'}P_{ss'}^{a}\\frac{\\partial V^{\\pi}(s')}{\\partial \\theta}   \\right ]$$<\/p>\n\n\n\n<p>$\\rho$ is a scalar that measures the performance of $\\pi$, so it depends only on which $\\pi$ we choose and has nothing to do with any particular action $a$. The middle piece $-\\pi(s,a) \\frac{\\partial \\rho}{\\partial \\theta}$, together with the $\\sum_{a}$ outside it, can therefore be rewritten as $-\\frac{\\partial \\rho}{\\partial \\theta} \\sum_{a}\\pi(s,a)$. Since $\\pi(s,a)$ is a probability and the chances of taking all actions sum to 1, what remains is exactly the $-\\frac{\\partial \\rho}{\\partial \\theta}$ we want. Moving it to the left-hand side:<\/p>\n\n\n\n<p>$$\\frac{\\partial \\rho}{\\partial \\theta} = \\sum_{a}  \\left [  \\frac{\\partial \\pi(s,a)}{\\partial \\theta}Q^{\\pi}(s,a)  + \\pi(s,a) \\sum_{s'}P_{ss'}^{a}\\frac{\\partial V^{\\pi}(s')}{\\partial \\theta}   \\right ] - \\frac{\\partial V^{\\pi}(s)}{\\partial \\theta} $$<\/p>\n\n\n\n<p>Now weight both sides by the stationary distribution and sum over $s$, using $\\sum_{s}d^{\\pi}(s) = 1$ so the left-hand side keeps its value:<\/p>\n\n\n\n<p>$$\\sum_{s}d^{\\pi}(s) \\frac{\\partial \\rho}{\\partial \\theta} = \\sum_{s}d^{\\pi}(s) \\sum_{a}  \\frac{\\partial \\pi(s,a)}{\\partial \\theta}Q^{\\pi}(s,a)+ \\sum_{s}d^{\\pi}(s) \\sum_{a} \\pi(s,a) \\sum_{s'}P_{ss'}^{a}\\frac{\\partial V^{\\pi}(s')}{\\partial \\theta} - \\sum_{s}d^{\\pi}(s)  \\frac{\\partial V^{\\pi}(s)}{\\partial \\theta} $$<\/p>\n\n\n\n<p>The original paper simplifies from this step to the next in one rather bold jump; let's still take it bit by bit. The middle part of the right-hand side is<\/p>\n\n\n\n<p>$$ \\sum_{s}d^{\\pi}(s) \\sum_{a} \\pi(s,a) \\sum_{s'}P_{ss'}^{a}\\frac{\\partial V^{\\pi}(s')}{\\partial \\theta}$$<\/p>\n\n\n\n<p>The stationary distribution $d^{\\pi}(s)$ satisfies<\/p>\n\n\n\n<p>$$d^{\\pi}(\\hat{s}) = 
\\sum_{s}d^{\\pi}(s)\\sum_{a}\\pi(s,a)P_{s\\hat{s}}^{a}$$<\/p>\n\n\n\n<p>Intuitively, the probability that an arbitrary state $\\hat{s}$ occurs equals the total probability of transitioning into it from all states: iterate over every state $s$, and within it over the probability of taking each action and then landing in $\\hat{s}$; the result is exactly the probability that $\\hat{s}$ occurs.<\/p>\n\n\n\n<p>Since $\\sum$ satisfies commutativity and associativity, the order of summation can be swapped:<\/p>\n\n\n\n<p>$$\\sum_{s}d^{\\pi}(s) \\sum_{a} \\pi(s,a) \\sum_{s'}P_{ss'}^{a}\\frac{\\partial V^{\\pi}(s')}{\\partial \\theta} = \\sum_{s'} \\left [ \\sum_{s}d^{\\pi}(s) \\sum_{a} \\pi(s,a) P_{ss'}^{a}\\frac{\\partial V^{\\pi}(s')}{\\partial \\theta} \\right ]$$<\/p>\n\n\n\n<p>so in the middle we can piece together exactly the probability of transitioning into state $s'$:<\/p>\n\n\n\n<p>$$\\sum_{s'} \\left [ \\sum_{s}d^{\\pi}(s) \\sum_{a} \\pi(s,a) P_{ss'}^{a}\\right ] \\frac{\\partial V^{\\pi}(s')}{\\partial \\theta} = \\sum_{s'}d^{\\pi}(s')  \\frac{\\partial V^{\\pi}(s')}{\\partial \\theta}$$<\/p>\n\n\n\n<p>So the equation finally becomes<\/p>\n\n\n\n<p>$$\\sum_{s}d^{\\pi}(s) \\frac{\\partial \\rho}{\\partial \\theta}  = \\sum_{s}d^{\\pi}(s) \\sum_{a}  \\frac{\\partial \\pi(s,a)}{\\partial \\theta}Q^{\\pi}(s,a)+ \\sum_{s'}d^{\\pi}(s')  \\frac{\\partial V^{\\pi}(s')}{\\partial \\theta} - \\sum_{s}d^{\\pi}(s)  \\frac{\\partial V^{\\pi}(s)}{\\partial \\theta} $$<\/p>\n\n\n\n<p>The last two terms are identical up to the name of the summation index, so they cancel directly; then remove the $\\sum_{s}d^{\\pi}(s)$ on the left, which equals 1, and the final result comes out:<\/p>\n\n\n\n<p>$$ \\frac{\\partial \\rho}{\\partial \\theta} = \\sum_{s}d^{\\pi}(s) \\sum_{a}  \\frac{\\partial \\pi(s,a)}{\\partial \\theta}Q^{\\pi}(s,a)$$<\/p>\n\n\n\n<p>The paper has one sentence that really drives the point home:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>In any event, the key aspect of both expressions for the gradient is that there are no terms of the form $\\frac{\\partial d^{\\pi}(s)}{\\partial \\theta}$: the effect of policy changes on the distribution of states does not appear. This is convenient for approximating the gradient by sampling.<\/p>\n<\/blockquote>\n","protected":false},"excerpt":{"rendered":"<p>Q-Learning first learns a value function and then derives the optimal policy from that value function. Policy Gradient does exactly what its name says: it models the policy directly, which is very direct. 
Let's take a look at how the original paper derives it.<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[16,18],"tags":[],"class_list":["post-475","post","type-post","status-publish","format-standard","hentry","category-base","category-reinforcement-learning"],"_links":{"self":[{"href":"https:\/\/tensorzen.blog\/index.php?rest_route=\/wp\/v2\/posts\/475","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/tensorzen.blog\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/tensorzen.blog\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/tensorzen.blog\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/tensorzen.blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=475"}],"version-history":[{"count":60,"href":"https:\/\/tensorzen.blog\/index.php?rest_route=\/wp\/v2\/posts\/475\/revisions"}],"predecessor-version":[{"id":521,"href":"https:\/\/tensorzen.blog\/index.php?rest_route=\/wp\/v2\/posts\/475\/revisions\/521"}],"wp:attachment":[{"href":"https:\/\/tensorzen.blog\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=475"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/tensorzen.blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=475"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/tensorzen.blog\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=475"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}