rawlivedata.json
[{"title": "Determine the values of x for which the linear approximation is accurate to within 0.1.", "body_markdown": "So I've got a function...\r\n\r\n$\\frac{1}{(1+2x)^4}$ \r\n\r\n...with the linear approximation...\r\n\r\n$1-8x$\r\n\r\nFor all values of x where the linear approximation is accurate within 0.1, then surely we subtract the approximation from the function and set up an inequality.\r\n\r\n$-0.1 <= \\frac{1}{(1+2x)^4} - (1-8x) <= 0.1$\r\n\r\nI can't work out how to progress from here though.\r\n\r\nMultiplying out the brackets seems like a nightmare, and I don't think it will help? Any ideas?\r\n", "body": "<p>So I've got a function...</p>\n\n<p>$\\frac{1}{(1+2x)^4}$ </p>\n\n<p>...with the linear approximation...</p>\n\n<p>$1-8x$</p>\n\n<p>For all values of x where the linear approximation is accurate within 0.1, then surely we subtract the approximation from the function and set up an inequality.</p>\n\n<p>$-0.1 <= \\frac{1}{(1+2x)^4} - (1-8x) <= 0.1$</p>\n\n<p>I can't work out how to progress from here though.</p>\n\n<p>Multiplying out the brackets seems like a nightmare, and I don't think it will help? Any ideas?</p>\n", "question_id": 756495, "owner": {"profile_image": "http://i.stack.imgur.com/IRrQA.jpg?s=128&g=1", "reputation": 89, "display_name": "Wolff", "user_id": 110890, "link": "http://math.stackexchange.com/users/110890/wolff", "user_type": "registered"}, "link": "http://math.stackexchange.com/questions/756495/determine-the-values-of-x-for-which-the-linear-approximation-is-accurate-to-with", "tags": ["inequality", "factoring", "linear-approximation"], "creation_date": 1397662919}, {"title": "Fair rolling fo a six sided die", "body_markdown": "one die is rolled 1=$25; 2=$5; 3=$0; 4=-$10; 5=-$10; 6=-$15. How much would I need to pay , or be paid, to make the game fair?", "body": "<p>one die is rolled 1=$25; 2=$5; 3=$0; 4=-$10; 5=-$10; 6=-$15. How much would I need to pay , or be paid, to make the game fair?</p>\n", "question_id": 756491, "owner": {"profile_image": "https://www.gravatar.com/avatar/85047d6304f7fa6a7436ddc714da2617?s=128&d=identicon&r=PG&f=1", "reputation": 1, "display_name": "user143600", "user_id": 143600, "link": "http://math.stackexchange.com/users/143600/user143600", "user_type": "registered"}, "link": "http://math.stackexchange.com/questions/756491/fair-rolling-fo-a-six-sided-die", "tags": ["probability"], "creation_date": 1397662579}, {"title": "Showing $\\sin{\\frac{\\pi}{13}} \\cdot \\sin{\\frac{2\\pi}{13}} \\cdot \\sin{\\frac{3\\pi}{13}} \\cdots \\sin{\\frac{6\\pi}{13}} = \\frac{\\sqrt{13}}{64}$", "body_markdown": "I would like to show that \r\n\r\n$$ \\sin{\\frac{\\pi}{13}} \\cdot \\sin{\\frac{2\\pi}{13}} \\cdot \\sin{\\frac{3\\pi}{13}} \\cdots \\sin{\\frac{6\\pi}{13}} = \\frac{\\sqrt{13}}{64} $$\r\n\r\nI've been working on this for a few days. I've used product-to-sum formulas, writing the sines in their exponential form, etc. When I used the product-to-sum formulas, I'd get a factor of $1/64$, I obtained the same with writing the sines in their exponential form. I'd always get $1/64$ somehow, but never the $\\sqrt{13}$.\r\n\r\nI've come across this: http://mathworld.wolfram.com/TrigonometryAnglesPi13.html, (look at the 10th equation). It says that this comes from one of Newton's formulas and links to something named "Newton-Girard formulas", which I cannot understand. 
:(\r\n\r\nThanks in advance.", "body": "<p>I would like to show that </p>\n\n<p>$$ \\sin{\\frac{\\pi}{13}} \\cdot \\sin{\\frac{2\\pi}{13}} \\cdot \\sin{\\frac{3\\pi}{13}} \\cdots \\sin{\\frac{6\\pi}{13}} = \\frac{\\sqrt{13}}{64} $$</p>\n\n<p>I've been working on this for a few days. I've used product-to-sum formulas, writing the sines in their exponential form, etc. When I used the product-to-sum formulas, I'd get a factor of $1/64$, I obtained the same with writing the sines in their exponential form. I'd always get $1/64$ somehow, but never the $\\sqrt{13}$.</p>\n\n<p>I've come across this: <a href=\"http://mathworld.wolfram.com/TrigonometryAnglesPi13.html\" rel=\"nofollow\">http://mathworld.wolfram.com/TrigonometryAnglesPi13.html</a>, (look at the 10th equation). It says that this comes from one of Newton's formulas and links to something named \"Newton-Girard formulas\", which I cannot understand. :(</p>\n\n<p>Thanks in advance.</p>\n", "question_id": 756489, "owner": {"profile_image": "https://www.gravatar.com/avatar/95c748e809936567f2cdf0c86486d59f?s=128&d=identicon&r=PG", "reputation": 1, "display_name": "The Conjuring", "user_id": 138461, "link": "http://math.stackexchange.com/users/138461/the-conjuring", "user_type": "registered"}, "link": "http://math.stackexchange.com/questions/756489/showing-sin-frac-pi13-cdot-sin-frac2-pi13-cdot-sin-frac3-pi", "tags": ["trigonometry", "polynomials", "products"], "creation_date": 1397662526}, {"title": "Differentiability of random processes.", "body_markdown": "I know the appropriate criterions for mean-square differentiability of random processes. These criterions are connected with covariance function of a process. Are there any criterions for differentiability with probability one and in distribution for random processes?", "body": "<p>I know the appropriate criterions for mean-square differentiability of random processes. These criterions are connected with covariance function of a process. Are there any criterions for differentiability with probability one and in distribution for random processes?</p>\n", "question_id": 756486, "owner": {"profile_image": "https://www.gravatar.com/avatar/faf37c07fe019d2f87c5db0b48a10c46?s=128&d=identicon&r=PG&f=1", "reputation": 29, "display_name": "restrest", "user_id": 140670, "link": "http://math.stackexchange.com/users/140670/restrest", "user_type": "registered"}, "link": "http://math.stackexchange.com/questions/756486/differentiability-of-random-processes", "tags": ["probability", "convergence", "stochastic-processes"], "creation_date": 1397662353}, {"title": "How to find coordinates of the radius point of a arc", "body_markdown": "Given: Coordinates for each end of the arc, angle of arc, radius length. \r\n\r\nHow do I find the coordinates of the Radius Point?", "body": "<p>Given: Coordinates for each end of the arc, angle of arc, radius length. 
</p>\n\n<p>How do I find the coordinates of the Radius Point?</p>\n", "question_id": 756480, "owner": {"profile_image": "https://www.gravatar.com/avatar/b28d240a935db342486f32357e237164?s=128&d=identicon&r=PG&f=1", "reputation": 1, "display_name": "user143599", "user_id": 143599, "link": "http://math.stackexchange.com/users/143599/user143599", "user_type": "registered"}, "link": "http://math.stackexchange.com/questions/756480/how-to-find-coordinates-of-the-radius-point-of-a-arc", "tags": ["trigonometry", "plane-curves"], "creation_date": 1397662028}, {"title": "Average minimum distance", "body_markdown": "I posted a question earlier [here](http://math.stackexchange.com/questions/755623/average-minimum-distance-between-two-random-vectors) and someone pointed out that it might not be possible to find a closed form solution due to the elements of $\\mathbf{g}$ and $\\mathbf{f}$ defined below coming from a Rayleigh distribution. I'm asking the same question again where the distribution is not Rayleigh:\r\n\r\nLet $\\mathbf{y_1} =\\begin{bmatrix}g_1x_1 & g_2x_1 & \\dots & g_Nx_1 \\end{bmatrix}$ and $\\mathbf{y_2} = \\begin{bmatrix} f_1x_2 & f_2x_2 & \\dots & f_Nx_2\\end{bmatrix}$. All the elements of $\\mathbf{g}=\\begin{bmatrix} g_1&g_2 &\\dots &g_N\\end{bmatrix}$ and $\\mathbf{f}= \\begin{bmatrix} f_1 & f_2 & \\dots & f_N\\end{bmatrix}$ are drawn from a complex gaussian distribution with zero mean and variance $\\frac{1}{N}$. $x_1$ and $x_2$ are taken randomly from the set $\\{-1,1\\}$ with equal probability, i.e, $x_1$ can be $1$ or $-1$ with $0.5$ probability and similaraly $x_2$. There are four combinations for $(x_1, x_2)$: $(1,1),(1,-1),(-1,1),(-1,-1)$ which will lead to four combinations for $(\\mathbf{y_1}, \\mathbf{y_2})$ as well. Define $d$ as the euclidean distance between $\\mathbf{y_1} \\text{and } \\mathbf{y_2}$: $d=\\mathbf{|y_1-y_2|}$. I would like to calculate the probability density function of $d_\\min = \\min(d)$. I need the distribution because my goal is to calculate the average minimum distance $E(d_\\min)$. Any help would be greatly appreciated. Thanks.", "body": "<p>I posted a question earlier <a href=\"http://math.stackexchange.com/questions/755623/average-minimum-distance-between-two-random-vectors\">here</a> and someone pointed out that it might not be possible to find a closed form solution due to the elements of $\\mathbf{g}$ and $\\mathbf{f}$ defined below coming from a Rayleigh distribution. I'm asking the same question again where the distribution is not Rayleigh:</p>\n\n<p>Let $\\mathbf{y_1} =\\begin{bmatrix}g_1x_1 & g_2x_1 & \\dots & g_Nx_1 \\end{bmatrix}$ and $\\mathbf{y_2} = \\begin{bmatrix} f_1x_2 & f_2x_2 & \\dots & f_Nx_2\\end{bmatrix}$. All the elements of $\\mathbf{g}=\\begin{bmatrix} g_1&g_2 &\\dots &g_N\\end{bmatrix}$ and $\\mathbf{f}= \\begin{bmatrix} f_1 & f_2 & \\dots & f_N\\end{bmatrix}$ are drawn from a complex gaussian distribution with zero mean and variance $\\frac{1}{N}$. $x_1$ and $x_2$ are taken randomly from the set $\\{-1,1\\}$ with equal probability, i.e, $x_1$ can be $1$ or $-1$ with $0.5$ probability and similaraly $x_2$. There are four combinations for $(x_1, x_2)$: $(1,1),(1,-1),(-1,1),(-1,-1)$ which will lead to four combinations for $(\\mathbf{y_1}, \\mathbf{y_2})$ as well. Define $d$ as the euclidean distance between $\\mathbf{y_1} \\text{and } \\mathbf{y_2}$: $d=\\mathbf{|y_1-y_2|}$. I would like to calculate the probability density function of $d_\\min = \\min(d)$. 
I need the distribution because my goal is to calculate the average minimum distance $E(d_\\min)$. Any help would be greatly appreciated. Thanks.</p>\n", "question_id": 756479, "owner": {"profile_image": "https://www.gravatar.com/avatar/f861ad331a07c46fecbbdfb3a89f121f?s=128&d=identicon&r=PG", "reputation": 30, "display_name": "user4259", "user_id": 80564, "link": "http://math.stackexchange.com/users/80564/user4259", "user_type": "registered"}, "link": "http://math.stackexchange.com/questions/756479/average-minimum-distance", "tags": ["probability", "probability-distributions"], "creation_date": 1397661865}, {"title": "Does $\\{A={\\rm core}(A)\\}$ form a topology?", "body_markdown": "I've the following question: If $V$ be an arbitrary (real) vector space and if for any subset $A\\subset V$ we denote by $ {\\rm core}(A)$ the set $\\{a\\in A\\> |\\> \\forall v\\in V \\exists T>0 \\>\\forall \\,|t|<T: a+tv \\in A \\}$. Then $ {\\rm core}(A)$ is the set of all $a\\in A$ such that in every direction $v$ there is a environment of $a$ on the line $a+tv$ which also belongs to $A$ (I hope this set is really call the core of $A$...).\r\n\r\nNow it seems tempting to call set $A$ open if $A={\\rm core}(A)$.\r\nDoes the family $\\{A={\\rm core}(A)\\}$ form a topology on $V$?\r\n\r\n(I think I once read that it does not... But arbitrary unions of sets of this form as well as finite intersections seem to belong this this family as well...)\r\n\r\nThanks a lot in advance for any suggestions", "body": "<p>I've the following question: If $V$ be an arbitrary (real) vector space and if for any subset $A\\subset V$ we denote by $ {\\rm core}(A)$ the set $\\{a\\in A\\> |\\> \\forall v\\in V \\exists T>0 \\>\\forall \\,|t|<T: a+tv \\in A \\}$. Then $ {\\rm core}(A)$ is the set of all $a\\in A$ such that in every direction $v$ there is a environment of $a$ on the line $a+tv$ which also belongs to $A$ (I hope this set is really call the core of $A$...).</p>\n\n<p>Now it seems tempting to call set $A$ open if $A={\\rm core}(A)$.\nDoes the family $\\{A={\\rm core}(A)\\}$ form a topology on $V$?</p>\n\n<p>(I think I once read that it does not... But arbitrary unions of sets of this form as well as finite intersections seem to belong this this family as well...)</p>\n\n<p>Thanks a lot in advance for any suggestions</p>\n", "question_id": 756475, "owner": {"profile_image": "https://www.gravatar.com/avatar/341f9846dc6bfc0a531e074a507a3e1e?s=128&d=identicon&r=PG&f=1", "reputation": 1, "display_name": "user143595", "user_id": 143595, "link": "http://math.stackexchange.com/users/143595/user143595", "user_type": "registered"}, "link": "http://math.stackexchange.com/questions/756475/does-a-rm-corea-form-a-topology", "tags": ["general-topology"], "creation_date": 1397661736}, {"title": "Shapley value regression documentation", "body_markdown": "Can anyone point me to comprehensive description on shapley value regression? I've tried to google it, but i didn't found any book or papers on this regression algorithm, only scraps of disjointed info.\r\n\r\nThank you.", "body": "<p>Can anyone point me to comprehensive description on shapley value regression? 
I've tried to google it, but i didn't found any book or papers on this regression algorithm, only scraps of disjointed info.</p>\n\n<p>Thank you.</p>\n", "question_id": 756474, "owner": {"profile_image": "https://www.gravatar.com/avatar/de598d2d0510f0c56b60c986c0c7fb99?s=128&d=identicon&r=PG&f=1", "reputation": 1, "display_name": "user143597", "user_id": 143597, "link": "http://math.stackexchange.com/users/143597/user143597", "user_type": "registered"}, "link": "http://math.stackexchange.com/questions/756474/shapley-value-regression-documentation", "tags": ["statistics", "regression", "machine-learning"], "creation_date": 1397661721}, {"title": "Bott formula for projective bundles", "body_markdown": "For a projective space one has Bott formula to compute $h^q(\u2119^n,\u03a9^p(k))$, where $\u03a9^p(k)$ is the k-twisted sheaf of sections in the p-th power of the cotangent bundle of $\u2119^n$. I am wondering if there is a similar formula to compute these dimensions when you consider a projective bundle instead of a projective space.", "body": "<p>For a projective space one has Bott formula to compute $h^q(\u2119^n,\u03a9^p(k))$, where $\u03a9^p(k)$ is the k-twisted sheaf of sections in the p-th power of the cotangent bundle of $\u2119^n$. I am wondering if there is a similar formula to compute these dimensions when you consider a projective bundle instead of a projective space.</p>\n", "question_id": 756473, "owner": {"profile_image": "https://www.gravatar.com/avatar/a9e783f06ead82702f97dc5dfd0fa4f3?s=128&d=identicon&r=PG", "reputation": 1, "display_name": "boristenes", "user_id": 143598, "link": "http://math.stackexchange.com/users/143598/boristenes", "user_type": "unregistered"}, "link": "http://math.stackexchange.com/questions/756473/bott-formula-for-projective-bundles", "tags": ["algebraic-geometry"], "creation_date": 1397661700}, {"title": "Groups/Sets Notation Question", "body_markdown": "![enter image description here][1]\r\n\r\n\r\n [1]: http://i.stack.imgur.com/J5atX.png\r\n\r\nSimple question: But what does the sigma small Y mean, does it just represent a group? Also have seen this with numbers, and not quite sure what it means.\r\n\r\nThanks", "body": "<p><img src=\"http://i.stack.imgur.com/J5atX.png\" alt=\"enter image description here\"></p>\n\n<p>Simple question: But what does the sigma small Y mean, does it just represent a group? Also have seen this with numbers, and not quite sure what it means.</p>\n\n<p>Thanks</p>\n", "question_id": 756470, "owner": {"profile_image": "https://www.gravatar.com/avatar/504174de74647d38b7621a1ff33274b3?s=128&d=identicon&r=PG&f=1", "reputation": 53, "display_name": "user140152", "user_id": 140152, "link": "http://math.stackexchange.com/users/140152/user140152", "user_type": "registered"}, "link": "http://math.stackexchange.com/questions/756470/groups-sets-notation-question", "tags": ["abstract-algebra", "group-theory", "notation"], "creation_date": 1397661621}, {"title": "How can I evaluate this sum of product?", "body_markdown": "\r\n\r\n$$\\sum_{n=1}^{\\infty}\\prod_{k=1}^n\\left(\\frac{\\sqrt{k+1}-1}{\\sqrt{k}}\\right)$$\r\n\r\nI have no idea . Thank you.", "body": "<p>$$\\sum_{n=1}^{\\infty}\\prod_{k=1}^n\\left(\\frac{\\sqrt{k+1}-1}{\\sqrt{k}}\\right)$$</p>\n\n<p>I have no idea . 
Thank you.</p>\n", "question_id": 756469, "owner": {"profile_image": "https://www.gravatar.com/avatar/eee65a2c6f9b8269c61b0c4f81eb550c?s=128&d=identicon&r=PG", "reputation": 101, "display_name": "kong", "user_id": 120562, "link": "http://math.stackexchange.com/users/120562/kong", "user_type": "unregistered"}, "link": "http://math.stackexchange.com/questions/756469/how-can-i-evaluate-this-sum-of-product", "tags": ["algebra-precalculus"], "creation_date": 1397661558}, {"title": "Ito's Lemma for Integral", "body_markdown": "Let $S$ follow GBM with $dS=(r-q)S\\,dt+\\sigma S\\,dW$ where $W$ is a standard Brownian motion.<br/>\r\nDefine $I_t=\\int_0^t qe^{r(t-u)}S_u \\,du$, then how can I determine $dI_t$? The answer should be $dI_t=(rI_t+qS_t)\\,dt$. (Oh and this is not homework, I was just reading some papers on stock loan pricing and ran into this formula, whose proof was not given.)\r\n\r\nIf I'm correct, then $\\frac{\\partial I_t}{\\partial t}=qS_u e^{r(t-t)}=qS_u.$ I just don't know how to proceed with $\\frac{\\partial I_t}{\\partial S}$ and $\\frac{\\partial^2 I_t}{\\partial S^2}$. Any help is greatly appreciated, thank you!", "body": "<p>Let $S$ follow GBM with $dS=(r-q)S\\,dt+\\sigma S\\,dW$ where $W$ is a standard Brownian motion.<br/>\nDefine $I_t=\\int_0^t qe^{r(t-u)}S_u \\,du$, then how can I determine $dI_t$? The answer should be $dI_t=(rI_t+qS_t)\\,dt$. (Oh and this is not homework, I was just reading some papers on stock loan pricing and ran into this formula, whose proof was not given.)</p>\n\n<p>If I'm correct, then $\\frac{\\partial I_t}{\\partial t}=qS_u e^{r(t-t)}=qS_u.$ I just don't know how to proceed with $\\frac{\\partial I_t}{\\partial S}$ and $\\frac{\\partial^2 I_t}{\\partial S^2}$. Any help is greatly appreciated, thank you!</p>\n", "question_id": 756465, "owner": {"profile_image": "https://www.gravatar.com/avatar/dc153c4dfe6d2f77f67b1301adb89c0b?s=128&d=identicon&r=PG", "reputation": 419, "display_name": "drawar", "user_id": 49015, "link": "http://math.stackexchange.com/users/49015/drawar", "user_type": "registered"}, "link": "http://math.stackexchange.com/questions/756465/itos-lemma-for-integral", "tags": ["stochastic-calculus"], "creation_date": 1397661378}, {"title": "Why optimization problems cannot be solved by simple derivative?", "body_markdown": "> Let $f(\\cdot)$ be a linear function. \r\n\r\n>$f:\\mathbb{R}^n\\rightarrow\\mathbb{R}$\r\n\r\n>$\\;\\quad\\;\\mathbf{x}\\;\\rightarrow f(\\mathbf{x})$.\r\n\r\n\r\n>Let $\\mathbf{A}$ be a matrix in $\\mathbb{R}^{m\\times n}$.\r\n\r\n>Let $\\mathbf{b}=\\left(b_1, b_2, \\dotsc, b_m\\right)^\\mathrm{T}$ be a vector in $\\mathbb{R}^{m}$.\r\n\r\n>The following problem $(P)$:\r\n\r\n> $\\mathrm{maximize}\\;\\;\\; f(\\mathbf{x})$\r\n\r\n> subject to: $\\;\\;\\mathbf{A}\\mathbf{x}\\leq \\mathbf{b}$.\r\n\r\n> is a $0$-$1$ binary programming problem or a linear programming problem depending on $\\mathbf{x}$ if $\\mathbf{x}\\in\\{0, 1\\}^n$ or $\\mathbf{x}\\in\\mathbb{R}^n$.\r\n\r\n> Problem $(P)$ is also hard to solve or easy to solve depending on $\\mathbf{x}$ if $\\mathbf{x}\\in\\{0, 1\\}^n$ or $\\mathbf{x}\\in\\mathbb{R}^n$.\r\n\r\nMy question is the following: It is a very basic question. 
May be wrong one but I really need a clarification.\r\n\r\n - If $\\mathbf{x}\\in\\mathbb{R}^n$, why we do not simply calculate the derivative of $f(\\cdot)$ at point $\\mathbf{x}$ and we see its minimum and maximum and verify the constraints and solve $(P)$?\r\n - If $\\mathbf{x}\\in\\{0, 1\\}^n$, why we cannot do the same thing?\r\n - Is the derivative make things complicated?\r\n - Does the derivative exists if $\\mathbf{x}\\in\\{0, 1\\}^n$?\r\n\r\nPlease any explanation. And thank you very much for your help.\r\n\r\nEDIT: The function $f(\\cdot)$ could be nonlinear and has a non-constant derivative. This AFAIK makes the problem even more complex. The problem becomes $0$-$1$ nonlinear binary programming or nonlinear programming problem. Still the derivative cannot solve $(P)$?", "body": "<blockquote>\n <p>Let $f(\\cdot)$ be a linear function. </p>\n \n <p>$f:\\mathbb{R}^n\\rightarrow\\mathbb{R}$</p>\n \n <p>$\\;\\quad\\;\\mathbf{x}\\;\\rightarrow f(\\mathbf{x})$.</p>\n \n <p>Let $\\mathbf{A}$ be a matrix in $\\mathbb{R}^{m\\times n}$.</p>\n \n <p>Let $\\mathbf{b}=\\left(b_1, b_2, \\dotsc, b_m\\right)^\\mathrm{T}$ be a vector in $\\mathbb{R}^{m}$.</p>\n \n <p>The following problem $(P)$:</p>\n \n <p>$\\mathrm{maximize}\\;\\;\\; f(\\mathbf{x})$</p>\n \n <p>subject to: $\\;\\;\\mathbf{A}\\mathbf{x}\\leq \\mathbf{b}$.</p>\n \n <p>is a $0$-$1$ binary programming problem or a linear programming problem depending on $\\mathbf{x}$ if $\\mathbf{x}\\in\\{0, 1\\}^n$ or $\\mathbf{x}\\in\\mathbb{R}^n$.</p>\n \n <p>Problem $(P)$ is also hard to solve or easy to solve depending on $\\mathbf{x}$ if $\\mathbf{x}\\in\\{0, 1\\}^n$ or $\\mathbf{x}\\in\\mathbb{R}^n$.</p>\n</blockquote>\n\n<p>My question is the following: It is a very basic question. May be wrong one but I really need a clarification.</p>\n\n<ul>\n<li>If $\\mathbf{x}\\in\\mathbb{R}^n$, why we do not simply calculate the derivative of $f(\\cdot)$ at point $\\mathbf{x}$ and we see its minimum and maximum and verify the constraints and solve $(P)$?</li>\n<li>If $\\mathbf{x}\\in\\{0, 1\\}^n$, why we cannot do the same thing?</li>\n<li>Is the derivative make things complicated?</li>\n<li>Does the derivative exists if $\\mathbf{x}\\in\\{0, 1\\}^n$?</li>\n</ul>\n\n<p>Please any explanation. And thank you very much for your help.</p>\n\n<p>EDIT: The function $f(\\cdot)$ could be nonlinear and has a non-constant derivative. This AFAIK makes the problem even more complex. The problem becomes $0$-$1$ nonlinear binary programming or nonlinear programming problem. Still the derivative cannot solve $(P)$?</p>\n", "question_id": 756464, "owner": {"profile_image": "https://www.gravatar.com/avatar/34768bdeb68ec3341a603c90c00f5145?s=128&d=identicon&r=PG&f=1", "reputation": 172, "display_name": "zighalo", "user_id": 139954, "link": "http://math.stackexchange.com/users/139954/zighalo", "user_type": "registered"}, "link": "http://math.stackexchange.com/questions/756464/why-optimization-problems-cannot-be-solved-by-simple-derivative", "tags": ["reference-request", "education", "linear-programming", "integer-programming"], "creation_date": 1397661346}, {"title": "Error solving "stars and bars" type problem", "body_markdown": "I have what I thought is a fairly simple problem: Count non-negative integer solutions to the equation\r\n\r\n$$x_1 + x_2 + x_3 + x_4 + x_5 = 23$$\r\n\r\nsuch that $0 \\leq x_1 \\leq 9$.\r\n\r\nNot too hard, right? 
Simply ignore the upper-bound, count the\r\n$$\\begin{pmatrix}23 + (5-1) \\\\ (5-1)\\end{pmatrix} = \\begin{pmatrix}27 \\\\ 4\\end{pmatrix} = 17550$$ solutions. Subtract from this all (non-negative integer) solutions to the equation\r\n$$y_1 + 10 + x_2 + x_3 + x_4 + x_5 = 23,$$\r\nand there are $\\begin{pmatrix}17 \\\\ 4\\end{pmatrix} = 2380$ of these "bad" solutions we shouldn't have counted earlier, but did. Thus we find $17550 - 2380 = 15170$ solutions.\r\n\r\nSince this is a prohibitively large number of solutions to check by hand, I wrote a simple Python program to verify whether this answer is correct. It does indeed say there are $17550$ solutions without upper bounds, and $2380$ solutions to the equation for counting "bad" solutions.\r\n\r\nHowever, when I ask it throw away all solutions to the non-upper-bounded problem for which $x_1 \\geq 10$, it tells me it's found $15730$ solutions.\r\n\r\nMy question is: do I not understand the combinatorial calculation so that there are not actually $\\begin{pmatrix}27\\\\4\\end{pmatrix}-\\begin{pmatrix}17\\\\4\\end{pmatrix}$ solutions, or have I made some kind of programming mistake? Of course, both are also possible.\r\n", "body": "<p>I have what I thought is a fairly simple problem: Count non-negative integer solutions to the equation</p>\n\n<p>$$x_1 + x_2 + x_3 + x_4 + x_5 = 23$$</p>\n\n<p>such that $0 \\leq x_1 \\leq 9$.</p>\n\n<p>Not too hard, right? Simply ignore the upper-bound, count the\n$$\\begin{pmatrix}23 + (5-1) \\\\ (5-1)\\end{pmatrix} = \\begin{pmatrix}27 \\\\ 4\\end{pmatrix} = 17550$$ solutions. Subtract from this all (non-negative integer) solutions to the equation\n$$y_1 + 10 + x_2 + x_3 + x_4 + x_5 = 23,$$\nand there are $\\begin{pmatrix}17 \\\\ 4\\end{pmatrix} = 2380$ of these \"bad\" solutions we shouldn't have counted earlier, but did. Thus we find $17550 - 2380 = 15170$ solutions.</p>\n\n<p>Since this is a prohibitively large number of solutions to check by hand, I wrote a simple Python program to verify whether this answer is correct. It does indeed say there are $17550$ solutions without upper bounds, and $2380$ solutions to the equation for counting \"bad\" solutions.</p>\n\n<p>However, when I ask it throw away all solutions to the non-upper-bounded problem for which $x_1 \\geq 10$, it tells me it's found $15730$ solutions.</p>\n\n<p>My question is: do I not understand the combinatorial calculation so that there are not actually $\\begin{pmatrix}27\\\\4\\end{pmatrix}-\\begin{pmatrix}17\\\\4\\end{pmatrix}$ solutions, or have I made some kind of programming mistake? 
Of course, both are also possible.</p>\n", "question_id": 756462, "owner": {"profile_image": "https://www.gravatar.com/avatar/42426566d6affe24a3a9d8e531a654b4?s=128&d=identicon&r=PG&f=1", "reputation": 377, "display_name": "pjs36", "user_id": 120540, "link": "http://math.stackexchange.com/users/120540/pjs36", "user_type": "registered"}, "link": "http://math.stackexchange.com/questions/756462/error-solving-stars-and-bars-type-problem", "tags": ["combinatorics"], "creation_date": 1397661155}, {"title": "How to integrate $\\int \\frac{dy}{\\sqrt{4y+\\frac{1}{4y^2}+2C_1}}$?", "body_markdown": "How do I integrate $\\int \\frac{dy}{\\sqrt{4y+\\frac{1}{4y^2}+2C_1}}$, where $C_1$ is an arbitrary constant?\r\nIs this integral really complex (hard to integrate)?\r\n\r\n\r\n**EDIT:**\r\nThis comes from DE:\r\n$dy/dx = \\sqrt{4y+\\frac{1}{4y^2}+2C_1}$\r\n\r\nMaybe there is another way to get the solution in terms of $x$ or $y$?", "body": "<p>How do I integrate $\\int \\frac{dy}{\\sqrt{4y+\\frac{1}{4y^2}+2C_1}}$, where $C_1$ is an arbitrary constant?\nIs this integral really complex (hard to integrate)?</p>\n\n<p><strong>EDIT:</strong>\nThis comes from DE:\n$dy/dx = \\sqrt{4y+\\frac{1}{4y^2}+2C_1}$</p>\n\n<p>Maybe there is another way to get the solution in terms of $x$ or $y$?</p>\n", "question_id": 756461, "owner": {"profile_image": "https://www.gravatar.com/avatar/8e8e62efa42390b036948ed1ce88a58a?s=128&d=identicon&r=PG", "reputation": 63, "display_name": "Kristians Kuhta", "user_id": 141312, "link": "http://math.stackexchange.com/users/141312/kristians-kuhta", "user_type": "registered"}, "link": "http://math.stackexchange.com/questions/756461/how-to-integrate-int-fracdy-sqrt4y-frac14y22c-1", "tags": ["integration", "definite-integrals"], "creation_date": 1397661131}, {"title": "Scalar Curvature of a metric on the hemisphere, from a paper on the Min-Oo Conjecture", "body_markdown": "I'm reading a paper on the Min-Oo Conjecture (http://arxiv.org/abs/1004.3088), and I'm stuck on the following step in a proposition:\r\n\r\nGiven a metric $g_0(t)$ on the upper hemisphere $\\mathbb{S}^n_+$, and the standard metric $\\bar{g}$ on the sphere $\\mathbb{S}^n$ restricted to the hemisphere, we define another metric $g(t)$ on the upper hemisphere by:\r\n\r\n$g(t) = g_0(t) + \\frac{1}{2(n-1)}t^2 u \\bar{g}$\r\n\r\n\r\nIn the paper they say this implies:\r\n\r\n$R_{g(t)}=R_{g_0(t)} - \\frac{1}{2} t^2 (\\Delta u + nu) + O(t^3)$\r\n\r\nI'm not sure if I should break down and calculate like mad, or if there is a better way to see this. The conditions we have on $u$ are simply\r\n\r\n$u|_{\\partial \\mathbb{S}^n_+}= 0$\r\n\r\nI appreciate all help. Cheers!", "body": "<p>I'm reading a paper on the Min-Oo Conjecture (<a href=\"http://arxiv.org/abs/1004.3088\" rel=\"nofollow\">http://arxiv.org/abs/1004.3088</a>), and I'm stuck on the following step in a proposition:</p>\n\n<p>Given a metric $g_0(t)$ on the upper hemisphere $\\mathbb{S}^n_+$, and the standard metric $\\bar{g}$ on the sphere $\\mathbb{S}^n$ restricted to the hemisphere, we define another metric $g(t)$ on the upper hemisphere by:</p>\n\n<p>$g(t) = g_0(t) + \\frac{1}{2(n-1)}t^2 u \\bar{g}$</p>\n\n<p>In the paper they say this implies:</p>\n\n<p>$R_{g(t)}=R_{g_0(t)} - \\frac{1}{2} t^2 (\\Delta u + nu) + O(t^3)$</p>\n\n<p>I'm not sure if I should break down and calculate like mad, or if there is a better way to see this. The conditions we have on $u$ are simply</p>\n\n<p>$u|_{\\partial \\mathbb{S}^n_+}= 0$</p>\n\n<p>I appreciate all help. 
Cheers!</p>\n", "question_id": 756457, "owner": {"profile_image": "https://www.gravatar.com/avatar/c37b5f1993014e7aeebed96769a948a1?s=128&d=identicon&r=PG", "reputation": 1, "display_name": "Michael Pinkard", "user_id": 74400, "link": "http://math.stackexchange.com/users/74400/michael-pinkard", "user_type": "registered"}, "link": "http://math.stackexchange.com/questions/756457/scalar-curvature-of-a-metric-on-the-hemisphere-from-a-paper-on-the-min-oo-conje", "tags": ["geometry", "differential-geometry", "riemannian-geometry", "curvature"], "creation_date": 1397660978}, {"title": "Finding a basis for set of vectors (columns/rows)", "body_markdown": "I was wondering if someone could help me with the following. I have to find a basis for the subset of $\\mathbb{R}^4$ spanned by $(1,2,0,3)^T$, $(3,5,1,7)^T$, $ (1,1,1,1)^T$ and $(0,1,-1,2)^T$. Now I know how to find this, but I am a bit confused whether I should let these vectors be the columns of a matrix which I then row reduce, or the rows of such a matrix. I always thought that since these are column vectors, I should let them be the columns (and if they were row vectors, I should let them be the rows). But I now got confused after looking at \r\n\r\nhttp://crazyproject.wordpress.com/2011/07/14/find-a-basis-for-the-span-of-a-given-set-of-vectors/\r\n\r\nwhere the given vectors are row vectors, which are then taken as the columns in a matrix ... So I was hoping someone could help me get this clear when to put the vectors as rows and when to put them as columns in a matrix when finding a basis. \r\n", "body": "<p>I was wondering if someone could help me with the following. I have to find a basis for the subset of $\\mathbb{R}^4$ spanned by $(1,2,0,3)^T$, $(3,5,1,7)^T$, $ (1,1,1,1)^T$ and $(0,1,-1,2)^T$. Now I know how to find this, but I am a bit confused whether I should let these vectors be the columns of a matrix which I then row reduce, or the rows of such a matrix. I always thought that since these are column vectors, I should let them be the columns (and if they were row vectors, I should let them be the rows). But I now got confused after looking at </p>\n\n<p><a href=\"http://crazyproject.wordpress.com/2011/07/14/find-a-basis-for-the-span-of-a-given-set-of-vectors/\" rel=\"nofollow\">http://crazyproject.wordpress.com/2011/07/14/find-a-basis-for-the-span-of-a-given-set-of-vectors/</a></p>\n\n<p>where the given vectors are row vectors, which are then taken as the columns in a matrix ... So I was hoping someone could help me get this clear when to put the vectors as rows and when to put them as columns in a matrix when finding a basis. </p>\n", "question_id": 756456, "owner": {"profile_image": "https://www.gravatar.com/avatar/780b86110393bb5fc2538bd5c496471b?s=128&d=identicon&r=PG&f=1", "reputation": 59, "display_name": "user133993", "user_id": 133993, "link": "http://math.stackexchange.com/users/133993/user133993", "user_type": "registered"}, "link": "http://math.stackexchange.com/questions/756456/finding-a-basis-for-set-of-vectors-columns-rows", "tags": ["linear-algebra"], "creation_date": 1397660927}, {"title": "A basic question on absolute continuous measures", "body_markdown": "Show that $\\nu \\equiv \\mu$ (i.e. they are absolutely continuous with respect to each other) are $\\sigma$-finite measures on $(\\Omega, F)$. \r\n\r\nconsider the set $\\{\\omega: \\frac{d\\mu}{d\\nu} =0 \\}$. Is this correct that for that set $\\frac {d\\nu}{d\\mu} =0$ ?", "body": "<p>Show that $\\nu \\equiv \\mu$ (i.e. 
they are absolutely continuous with respect to each other) are $\\sigma$-finite measures on $(\\Omega, F)$. </p>\n\n<p>consider the set $\\{\\omega: \\frac{d\\mu}{d\\nu} =0 \\}$. Is this correct that for that set $\\frac {d\\nu}{d\\mu} =0$ ?</p>\n", "question_id": 756455, "owner": {"profile_image": "https://www.gravatar.com/avatar/0712694bd9f09f9cda961500af2830d7?s=128&d=identicon&r=PG", "reputation": 886, "display_name": "aaaaaa", "user_id": 14465, "link": "http://math.stackexchange.com/users/14465/aaaaaa", "user_type": "registered"}, "link": "http://math.stackexchange.com/questions/756455/a-basic-question-on-absolute-continuous-measures", "tags": ["measure-theory"], "creation_date": 1397660903}, {"title": "Solving a recurrence realtion using forward substitution.", "body_markdown": "T(n) = 7T(n/7) for n>1, n a power of 7\r\nthat's my question and so far I think it's T(7) = 7T(7/7) = 7T(1) = 7\r\n then T(49) = 49T(49/7) = 49T(7) = ?\r\nWhat is the answer to that?", "body": "<p>T(n) = 7T(n/7) for n>1, n a power of 7\nthat's my question and so far I think it's T(7) = 7T(7/7) = 7T(1) = 7\n then T(49) = 49T(49/7) = 49T(7) = ?\nWhat is the answer to that?</p>\n", "question_id": 756451, "owner": {"profile_image": "https://www.gravatar.com/avatar/849e35d29e6cadce2cdf5b09a030240d?s=128&d=identicon&r=PG", "reputation": 1, "display_name": "John", "user_id": 143596, "link": "http://math.stackexchange.com/users/143596/john", "user_type": "unregistered"}, "link": "http://math.stackexchange.com/questions/756451/solving-a-recurrence-realtion-using-forward-substitution", "tags": ["homework"], "creation_date": 1397660774}, {"title": "Schur-Weyl duality from Double Commutant Theory", "body_markdown": "Let $V$ be a finite dim complex vector space. Then $V^{\\otimes n}$ carries an action by $S_n$ by permuting factors \r\n\r\n$\\sigma(\\pi)(v_1\\otimes...\\otimes v_n)=v_{\\pi^{-1}(1)}\\otimes...\\otimes v_{\\pi^{-1}(n)}$\r\n\r\nand an action of GL(V) using the diagonal action of its defining representation $g\\in GL(V)$\r\n\r\n$\\rho(g)(v_1\\otimes...\\otimes v_n)=gv_1\\otimes...\\otimes gv_n$\r\n\r\nNow we have that if $\\mathcal{A}=\\sigma(\\mathbb{C}[S_n])$ then its commutant is $\\mathcal{A'}=\\rho(\\mathbb{C}[GL(V)])$. Using results from the double commutant theory we get a decomposition of $V^{\\otimes n}$ as an $\\mathcal{A}\\times\\mathcal{A'}-module$.\r\n \r\n1) Given that we know the above decomposition how can we argue that we also get a decomposition as an $S_n\\times GL(V)-module$?\r\n\r\n2) Further, how do we get the decomposition as an $S_n\\times U(V)-module$?\r\n\r\nThe fact that we get all irreducible reps of a group from its group algebra probably plays an important part for 1) but it's not completely clear to me how to go from here to the images of group algebras under the above reps.\r\n\r\nMany thanks.\r\n", "body": "<p>Let $V$ be a finite dim complex vector space. Then $V^{\\otimes n}$ carries an action by $S_n$ by permuting factors </p>\n\n<p>$\\sigma(\\pi)(v_1\\otimes...\\otimes v_n)=v_{\\pi^{-1}(1)}\\otimes...\\otimes v_{\\pi^{-1}(n)}$</p>\n\n<p>and an action of GL(V) using the diagonal action of its defining representation $g\\in GL(V)$</p>\n\n<p>$\\rho(g)(v_1\\otimes...\\otimes v_n)=gv_1\\otimes...\\otimes gv_n$</p>\n\n<p>Now we have that if $\\mathcal{A}=\\sigma(\\mathbb{C}[S_n])$ then its commutant is $\\mathcal{A'}=\\rho(\\mathbb{C}[GL(V)])$. 
Using results from the double commutant theory we get a decomposition of $V^{\\otimes n}$ as an $\\mathcal{A}\\times\\mathcal{A'}-module$.</p>\n\n<p>1) Given that we know the above decomposition how can we argue that we also get a decomposition as an $S_n\\times GL(V)-module$?</p>\n\n<p>2) Further, how do we get the decomposition as an $S_n\\times U(V)-module$?</p>\n\n<p>The fact that we get all irreducible reps of a group from its group algebra probably plays an important part for 1) but it's not completely clear to me how to go from here to the images of group algebras under the above reps.</p>\n\n<p>Many thanks.</p>\n", "question_id": 756450, "owner": {"profile_image": "https://www.gravatar.com/avatar/c982684be9f4a8c3ceda0a67dbe4a65c?s=128&d=identicon&r=PG", "reputation": 11, "display_name": "Chris", "user_id": 53410, "link": "http://math.stackexchange.com/users/53410/chris", "user_type": "registered"}, "link": "http://math.stackexchange.com/questions/756450/schur-weyl-duality-from-double-commutant-theory", "tags": ["abstract-algebra", "group-theory", "representation-theory"], "creation_date": 1397660735}, {"title": "Linear operators and matrices", "body_markdown": "Let $B=\\{(1,0),(1,1)\\}$ be a basis of $\\mathbb R^2$. Given the following matrix representation of an linear operator $T$ over the basis $B$:\r\n\r\n$$[T]_B=\r\n \\begin{pmatrix}\r\n -1 & 1 \\\\\r\n 2 & 1 \\\\\r\n \\end{pmatrix}\r\n$$\r\n\r\nHow can we find the operator $T(x,y)$?\r\n\r\nI don't know why my strategy doesn't work:\r\n\r\n$T(x,y)=xT(1,0)+yT(0,1)=x(-1,2)+y(1,1)=(-x+y,2x+y)$.\r\n\r\nI would like to know what's the _standard_ method to find this operator.\r\n\r\nThanks in advance.", "body": "<p>Let $B=\\{(1,0),(1,1)\\}$ be a basis of $\\mathbb R^2$. Given the following matrix representation of an linear operator $T$ over the basis $B$:</p>\n\n<p>$$[T]_B=\n \\begin{pmatrix}\n -1 & 1 \\\\\n 2 & 1 \\\\\n \\end{pmatrix}\n$$</p>\n\n<p>How can we find the operator $T(x,y)$?</p>\n\n<p>I don't know why my strategy doesn't work:</p>\n\n<p>$T(x,y)=xT(1,0)+yT(0,1)=x(-1,2)+y(1,1)=(-x+y,2x+y)$.</p>\n\n<p>I would like to know what's the <em>standard</em> method to find this operator.</p>\n\n<p>Thanks in advance.</p>\n", "question_id": 756448, "owner": {"profile_image": "http://www.gravatar.com/avatar/19a6d761f5102524e2da492fc7a1ce86?s=128&d=identicon&r=PG", "reputation": 3916, "display_name": "user42912", "user_id": 42912, "link": "http://math.stackexchange.com/users/42912/user42912", "user_type": "registered"}, "link": "http://math.stackexchange.com/questions/756448/linear-operators-and-matrices", "tags": ["linear-algebra"], "creation_date": 1397660590}, {"title": ""Reverse" distribution-tails", "body_markdown": "Chernoff, Markov and Chebyhev all give some upper bound for tail probabilities, e.g. Chebyshev gives us\r\n\r\n$Pr[|X-E[X]| \\geq t] \\leq \\frac{Var[X]}{t^2}$. \r\n\r\nThis is quite helpful, but what if I would like to know $f(E[X], Var[X])$ such that\r\n\r\n$Pr[|X-E[X]| \\geq t] \\geq f(E[X], Var[X])$?\r\n\r\n**More context, if needed:**\r\n$X$ is a random variable with expected value $E[X]$ and variance $Var[X]$ as well as Support $\\mathcal{X}$. We define\r\n$$I:=[E[X]-t, E[X]+t]$$\r\n$$I^c:=\\mathcal{X} - I$$\r\nI want to calculate the following expectation $E[g(X)]=\\sum_{x \\in \\mathcal{X}} Pr[X=x] g(x)$. Now I'd like to put summands together, e.g. 
\r\n$$E[g(X)]=\\sum_{x \\in I}Pr[X=x] \\cdot g(x)+\\sum_{x \\in I^c}Pr[X=x] \\cdot g(x) \\leq Pr[|X-E[X]| \\leq t] \\cdot max_{x\\in I } g(x) + Pr[|X-E[X]| \\geq t] \\cdot max_{x\\in I^c }g(x) \\leq (1-f(E[X],Var[X]))\\cdot max_{x\\in I } g(x) + \\frac{Var[X]}{t^2}\\cdot max_{x\\in I^c }g(x)$$\r\n\r\nI'm not sure that my calculation above makes sense, so if you could tell me whether it is correct or not, and alternatively give me a way to find an upper bound for $E[g(X)]$ based on $E[X]$ and $Var[X]$, that would be great. \r\n\r\nIt might be important that in general $g$ is NOT a linear function and is neither convex nor concave. ", "body": "<p>Chernoff, Markov and Chebyhev all give some upper bound for tail probabilities, e.g. Chebyshev gives us</p>\n\n<p>$Pr[|X-E[X]| \\geq t] \\leq \\frac{Var[X]}{t^2}$. </p>\n\n<p>This is quite helpful, but what if I would like to know $f(E[X], Var[X])$ such that</p>\n\n<p>$Pr[|X-E[X]| \\geq t] \\geq f(E[X], Var[X])$?</p>\n\n<p><strong>More context, if needed:</strong>\n$X$ is a random variable with expected value $E[X]$ and variance $Var[X]$ as well as Support $\\mathcal{X}$. We define\n$$I:=[E[X]-t, E[X]+t]$$\n$$I^c:=\\mathcal{X} - I$$\nI want to calculate the following expectation $E[g(X)]=\\sum_{x \\in \\mathcal{X}} Pr[X=x] g(x)$. Now I'd like to put summands together, e.g. \n$$E[g(X)]=\\sum_{x \\in I}Pr[X=x] \\cdot g(x)+\\sum_{x \\in I^c}Pr[X=x] \\cdot g(x) \\leq Pr[|X-E[X]| \\leq t] \\cdot max_{x\\in I } g(x) + Pr[|X-E[X]| \\geq t] \\cdot max_{x\\in I^c }g(x) \\leq (1-f(E[X],Var[X]))\\cdot max_{x\\in I } g(x) + \\frac{Var[X]}{t^2}\\cdot max_{x\\in I^c }g(x)$$</p>\n\n<p>I'm not sure that my calculation above makes sense, so if you could tell me whether it is correct or not, and alternatively give me a way to find an upper bound for $E[g(X)]$ based on $E[X]$ and $Var[X]$, that would be great. </p>\n\n<p>It might be important that in general $g$ is NOT a linear function and is neither convex nor concave. </p>\n", "question_id": 756447, "owner": {"profile_image": "https://www.gravatar.com/avatar/32eb7e101cb0afae6d3553f9a46f71d9?s=128&d=identicon&r=PG&f=1", "reputation": 27, "display_name": "user136457", "user_id": 136457, "link": "http://math.stackexchange.com/users/136457/user136457", "user_type": "registered"}, "link": "http://math.stackexchange.com/questions/756447/reverse-distribution-tails", "tags": ["probability", "expectation", "standard-deviation", "distribution-tails"], "creation_date": 1397660576}, {"title": "Navigational Help combinations", "body_markdown": "You take exactly 10 steps. You can move in one of three directions (North West East) with no desired target point (just 10 steps). You cannot follow a move East with a move West or vice versa. How many possible combinations of paths are there. I know it's 3^10 but am unsure as to how to limit the counter from the repetition of only two particular instances ", "body": "<p>You take exactly 10 steps. You can move in one of three directions (North West East) with no desired target point (just 10 steps). You cannot follow a move East with a move West or vice versa. How many possible combinations of paths are there. 
I know it's 3^10 but am unsure as to how to limit the counter from the repetition of only two particular instances </p>\n", "question_id": 756446, "owner": {"profile_image": "https://www.gravatar.com/avatar/18a0dd192a41ac9e8ac00c93f43f6b86?s=128&d=identicon&r=PG", "reputation": 21, "display_name": "DTR9999", "user_id": 143589, "link": "http://math.stackexchange.com/users/143589/dtr9999", "user_type": "unregistered"}, "link": "http://math.stackexchange.com/questions/756446/navigational-help-combinations", "tags": ["combinations"], "creation_date": 1397660576}, {"title": "Solving a system of modular equatios", "body_markdown": "Describe all integers that satisfy the four following equations. I imagine there will only be one, a I think I can attack this problem using a method I read about on Wikipedia called the chinese remainder theorem, is this a good place to start, or is there a trivial method for solving this?\r\n\r\n$$\\begin{align*}\r\n\\\\3{}x + 4 \u2261 5 \\pmod {7}.\r\n\\\\4x {}+ 5 \u2261 6 \\pmod {9}.\r\n\\\\6x + 1 \u2261{} 7 \\pmod {11}.\\\\7x + 2{} \u2261 8 \\pmod {1{}3}.\r\n\\end{align*}$$\r\n\r\nI will edit in my attempt of the chinese remainder theorem after I get confirmation that it is optimal(if it is), thank you all!", "body": "<p>Describe all integers that satisfy the four following equations. I imagine there will only be one, a I think I can attack this problem using a method I read about on Wikipedia called the chinese remainder theorem, is this a good place to start, or is there a trivial method for solving this?</p>\n\n<p>$$\\begin{align*}\n\\\\3{}x + 4 \u2261 5 \\pmod {7}.\n\\\\4x {}+ 5 \u2261 6 \\pmod {9}.\n\\\\6x + 1 \u2261{} 7 \\pmod {11}.\\\\7x + 2{} \u2261 8 \\pmod {1{}3}.\n\\end{align*}$$</p>\n\n<p>I will edit in my attempt of the chinese remainder theorem after I get confirmation that it is optimal(if it is), thank you all!</p>\n", "question_id": 756440, "owner": {"profile_image": "https://www.gravatar.com/avatar/4ac35e32e85f903b690ceb0f681ea18d?s=128&d=identicon&r=PG&f=1", "reputation": 65, "display_name": "Katie", "user_id": 137849, "link": "http://math.stackexchange.com/users/137849/katie", "user_type": "registered"}, "link": "http://math.stackexchange.com/questions/756440/solving-a-system-of-modular-equatios", "tags": ["abstract-algebra", "modular-arithmetic"], "creation_date": 1397660400}, {"title": "Does this Stochastic Differential Equation have a name?", "body_markdown": "I came across this SDE and since I am not an expert I am wondering if this SDE is known to have an closed form solution for first passage times.\r\n\r\nThe SDE is\r\n\r\n$$dY_t=(a+be^{ct}) \\, dt+\\sigma \\, dB_t$$\r\n\r\nHow does one go about finding an explicit distribution for first passage times in this case?", "body": "<p>I came across this SDE and since I am not an expert I am wondering if this SDE is known to have an closed form solution for first passage times.</p>\n\n<p>The SDE is</p>\n\n<p>$$dY_t=(a+be^{ct}) \\, dt+\\sigma \\, dB_t$$</p>\n\n<p>How does one go about finding an explicit distribution for first passage times in this case?</p>\n", "question_id": 756439, "owner": {"profile_image": "https://www.gravatar.com/avatar/845a894c8fad0a184c0e4a0254b826c7?s=128&d=identicon&r=PG", "reputation": 183, "display_name": "Nuno Calaim", "user_id": 86490, "link": "http://math.stackexchange.com/users/86490/nuno-calaim", "user_type": "registered"}, "link": "http://math.stackexchange.com/questions/756439/does-this-stochastic-differential-equation-have-a-name", "tags": 
["stochastic-processes", "stochastic-calculus", "stochastic-integrals"], "creation_date": 1397660315}, {"title": "What is $\\mathbb R^\\omega$?", "body_markdown": "I have seen $\\mathbb R^\\omega$ mentioned in my topology texts but cannot find where $\\omega$ is defined. Could someone please tell me what it means in comparison to $\\mathbb R^n$?", "body": "<p>I have seen $\\mathbb R^\\omega$ mentioned in my topology texts but cannot find where $\\omega$ is defined. Could someone please tell me what it means in comparison to $\\mathbb R^n$?</p>\n", "question_id": 756438, "owner": {"profile_image": "https://www.gravatar.com/avatar/4e3fc37e27d7bfbc0c19b632a6b9a5dc?s=128&d=identicon&r=PG", "reputation": 71, "display_name": "MCTi", "user_id": 50038, "link": "http://math.stackexchange.com/users/50038/mcti", "user_type": "registered"}, "link": "http://math.stackexchange.com/questions/756438/what-is-mathbb-r-omega", "tags": ["general-topology"], "creation_date": 1397660234}, {"title": "Confidence Interval Estimate", "body_markdown": "Assume a simple random sample is taken, the conditions for a binomial distribution are satisfied, and the sample proportions can be approximated by a normal distribution. From a sample of $200$ fish in a particular river, the number of breeders is $20$. Use a $95\\%$ confidence interval to construct a confidence interval estimate of the population proportion.\r\n\r\nA. $(.041,\\ .159)$ \r\nB. $(.058,\\ .142)$ \r\nC. $(.065,\\ .135)$ \r\nD. $(.073,\\ .121)$ ", "body": "<p>Assume a simple random sample is taken, the conditions for a binomial distribution are satisfied, and the sample proportions can be approximated by a normal distribution. From a sample of $200$ fish in a particular river, the number of breeders is $20$. Use a $95\\%$ confidence interval to construct a confidence interval estimate of the population proportion.</p>\n\n<p>A. $(.041,\\ .159)$<br>\nB. $(.058,\\ .142)$<br>\nC. $(.065,\\ .135)$<br>\nD. $(.073,\\ .121)$ </p>\n", "question_id": 756437, "owner": {"profile_image": "https://www.gravatar.com/avatar/7679a6ef3497f3b2fe0697906378fdee?s=128&d=identicon&r=PG&f=1", "reputation": 1, "display_name": "Nicole", "user_id": 143594, "link": "http://math.stackexchange.com/users/143594/nicole", "user_type": "registered"}, "link": "http://math.stackexchange.com/questions/756437/confidence-interval-estimate", "tags": ["statistics", "normal-distribution"], "creation_date": 1397660211}, {"title": "numerial solution to fredholm integral equation", "body_markdown": "Consider the integral equation:\r\n$$\r\ny(x)\r\n=1+\\int_0^cK(x,t)\\,y(t)\\,dt,\r\n$$\r\nwhere $x\\ge0$ and\r\n$$\r\nK(x,t)\r\n=\r\n\\frac{\\partial}{\\partial t}\\Phi\\left(\\frac{1}{\\kappa}\\log\\left(\\frac{t}{1+x}\\right)-\\frac{\\kappa}{2}\\right),\r\n$$\r\nwith $\\Phi(z)$ denoting the cdf of the standard Gaussian distribution, $\\kappa>0$.\r\n\r\nThe equation does not allow for an exact closed-form analytical solution. To solve it numerically one can, e.g., employ the collocation method with the functional basis chosen as $\\psi_j(x)=\\boldsymbol{1}_{x\\in[x_j,x_{j+1})}$. This is convenient because the kernel is a derivative, and so the integral operator will end up becoming a matrix that is nicely computable. It can be shown that the rate of convergence in this case is actually quadratic (even though the interpolation basis is formed of polynomials of degree 0). That's the theory. 
In practice the second order rate of convergence does not take effect until the partition size is of order $10^4$, which is rather high. I am thinking of a different approach now. For instance, approximating the kernel with a degenerate kernel seems a plausible idea to me. The question is are there known good approximations to log-normal distributions (the kernel is effectively a log-normal distribution) that would allow to achieve variable separation?\r\n\r\nOr perhaps there is a better approach to treat the above equation? Any help would be appreciated.\r\n\r\nUpdate: The MATLAB package developed by K. Atkinson doesn't quite work here, as it is based on the Nystrom method, and that in turn effectively attempts to linearize the kernel itself which is not a good idea in this case because when $\\kappa$ is small (say 0.1 or less) the kernel is similar in shape to the delta function. ", "body": "<p>Consider the integral equation:\n$$\ny(x)\n=1+\\int_0^cK(x,t)\\,y(t)\\,dt,\n$$\nwhere $x\\ge0$ and\n$$\nK(x,t)\n=\n\\frac{\\partial}{\\partial t}\\Phi\\left(\\frac{1}{\\kappa}\\log\\left(\\frac{t}{1+x}\\right)-\\frac{\\kappa}{2}\\right),\n$$\nwith $\\Phi(z)$ denoting the cdf of the standard Gaussian distribution, $\\kappa>0$.</p>\n\n<p>The equation does not allow for an exact closed-form analytical solution. To solve it numerically one can, e.g., employ the collocation method with the functional basis chosen as $\\psi_j(x)=\\boldsymbol{1}_{x\\in[x_j,x_{j+1})}$. This is convenient because the kernel is a derivative, and so the integral operator will end up becoming a matrix that is nicely computable. It can be shown that the rate of convergence in this case is actually quadratic (even though the interpolation basis is formed of polynomials of degree 0). That's the theory. In practice the second order rate of convergence does not take effect until the partition size is of order $10^4$, which is rather high. I am thinking of a different approach now. For instance, approximating the kernel with a degenerate kernel seems a plausible idea to me. The question is are there known good approximations to log-normal distributions (the kernel is effectively a log-normal distribution) that would allow to achieve variable separation?</p>\n\n<p>Or perhaps there is a better approach to treat the above equation? Any help would be appreciated.</p>\n\n<p>Update: The MATLAB package developed by K. Atkinson doesn't quite work here, as it is based on the Nystrom method, and that in turn effectively attempts to linearize the kernel itself which is not a good idea in this case because when $\\kappa$ is small (say 0.1 or less) the kernel is similar in shape to the delta function. </p>\n", "question_id": 756436, "owner": {"profile_image": "https://www.gravatar.com/avatar/646572476f6ae98f7f926663bf624ff0?s=128&d=identicon&r=PG", "reputation": 56, "display_name": "Jason", "user_id": 112144, "link": "http://math.stackexchange.com/users/112144/jason", "user_type": "registered"}, "link": "http://math.stackexchange.com/questions/756436/numerial-solution-to-fredholm-integral-equation", "tags": ["numerical-methods", "approximation", "integral-equations"], "creation_date": 1397660197}, {"title": "Simpson's Rule - Error of $O(h^5)$", "body_markdown": "The following is given:\r\n\r\nWhen deriving Simpson's Rule using the same approach as that of the Trapezoidal rule, it was stated that the method only generates an error term of $O(h^5)$ involving $f^{(3)}$. 
Devise an approach that can be used to show the error term should be $O(h^5)$ .\r\n\r\n\r\n\r\nI went about showing it in the following way:\r\n\r\nExpanded $f(x)$ as a $3^{rd}$ degree Taylor polynomial and then integrating accordingly, that brings me to the correct result.\r\n\r\nHowever, my Professor said that is not what he wants us to do for that question. Is there something that I am missing? Are there any alternative ways to show that it must have an error term of $O(h^5)$?", "body": "<p>The following is given:</p>\n\n<p>When deriving Simpson's Rule using the same approach as that of the Trapezoidal rule, it was stated that the method only generates an error term of $O(h^5)$ involving $f^{(3)}$. Devise an approach that can be used to show the error term should be $O(h^5)$ .</p>\n\n<p>I went about showing it in the following way:</p>\n\n<p>Expanded $f(x)$ as a $3^{rd}$ degree Taylor polynomial and then integrating accordingly, that brings me to the correct result.</p>\n\n<p>However, my Professor said that is not what he wants us to do for that question. Is there something that I am missing? Are there any alternative ways to show that it must have an error term of $O(h^5)$?</p>\n", "question_id": 756434, "owner": {"profile_image": "https://www.gravatar.com/avatar/e759a30f32534af9a1ba978649f95d22?s=128&d=identicon&r=PG&f=1", "reputation": 6, "display_name": "Dillon", "user_id": 137485, "link": "http://math.stackexchange.com/users/137485/dillon", "user_type": "registered"}, "link": "http://math.stackexchange.com/questions/756434/simpsons-rule-error-of-oh5", "tags": ["homework", "integration", "numerical-methods"], "creation_date": 1397660156}, {"title": "$\\left\\{\\frac{\\pi}{6}+\\frac{2K\\pi}{3}\\Big\\vert K\\in\\mathbb {Z}\\right\\}\\cap\\left\\{\\frac{\\pi}{3}+\\frac{K\\pi}{2}\\Big\\vert K\\in\\mathbb {Z}\\right\\}=$?", "body_markdown": "$$\\left\\{\\frac{\\pi}{6}+\\frac{2K\\pi}{3}\\,\\Big\\vert\\, K\\in\\mathbb {Z}\\right\\}\\cap\\left\\{\\frac{\\pi}{3}+\\frac{K\\pi}{2}\\,\\Big\\vert\\, K\\in\\mathbb {Z}\\right\\}=\\varnothing$$\r\n\r\nIs my answer right? If not, why?", "body": "<p>$$\\left\\{\\frac{\\pi}{6}+\\frac{2K\\pi}{3}\\,\\Big\\vert\\, K\\in\\mathbb {Z}\\right\\}\\cap\\left\\{\\frac{\\pi}{3}+\\frac{K\\pi}{2}\\,\\Big\\vert\\, K\\in\\mathbb {Z}\\right\\}=\\varnothing$$</p>\n\n<p>Is my answer right? If not, why?</p>\n", "question_id": 756432, "owner": {"profile_image": "https://www.gravatar.com/avatar/a66a17153621153ffb47af3f03a2aae1?s=128&d=identicon&r=PG", "reputation": 1, "display_name": "user143592", "user_id": 143592, "link": "http://math.stackexchange.com/users/143592/user143592", "user_type": "unregistered"}, "link": "http://math.stackexchange.com/questions/756432/left-frac-pi6-frac2k-pi3-big-vert-k-in-mathbb-z-right-cap-left", "tags": ["homework", "elementary-set-theory"], "creation_date": 1397660141}]