Jekyll2020-06-23T14:03:20-07:00https://rflperry.github.io/feed.xmlRonan Perrypersonal descriptionRonan Perryrperry27@jhu.eduImpact Fellowship2020-01-19T00:00:00-08:002020-01-19T00:00:00-08:00https://rflperry.github.io/posts/Impact_Fellowship<h1 id="the-impact-labs-fellowship">The Impact Labs Fellowship</h1>
<p>At the start of January 2020, I had the pleasure and good fortune of being selected to participate in the <a href="https://www.impactlabs.io/fellowship/">Impact Labs
Fellowship</a> as a 2020 Fellow. The fellowship is a two-week software engineering and
social entrepreneurship bootcamp hosted in New York City. I was joined by approximately 45 other individuals from diverse backgrounds, all
with shared computer science experience and a desire for our work to contribute to the social good.</p>
<p>It was a great experience in many ways. First, I was able to meet and network with many like-minded individuals from across North America
with different life experiences and perspectives. We connected with speakers and leaders from NGOs, social startups, think tanks, and
philanthropic organizations. My personal favorite speakers included:</p>
<ul>
<li><a href="https://www.nytimes.com/by/rich-harris">Rich Harris</a>, a graphics editor on the NYTimes investigative team whose articles I had read
and enjoyed immensely.</li>
<li><a href="https://www.techcongress.io/leadership/julie-samuels">Julie Samuels</a>, the executive director of Tech:NYC, which represents New York’s
high-tech industry to government, civic institutions, the business community, public policy forums, and the media.</li>
<li><a href="http://lav.io/">Sam Lavigne</a>, an artist and educator whose software projects comment on data, surveillance, and various
technologies. He has some interesting courses online which I am inspired to check out.</li>
</ul>
<p>In addition to these opportunities, numerous individuals came to teach courses on advanced software engineering and web development
methods and ideas, including D3, React, Kubernetes/Docker, PyTorch, and secure development practices.</p>
<p>This all concluded with a <a href="https://github.com/DesuImudia/Unwritten-Languages/tree/frontend">final project</a>. Working in a team of four, my group created a webapp prototype targeting the problem of the
extinction of unwritten languages. I built an interactive frontend and backend in React, and constructed a PostgreSQL
database to store audio recordings and information about words from unwritten languages. I visualized this information on an interactive
world map using Leaflet.</p>Ronan Perryrperry27@jhu.eduThe Impact Labs FellowshipGershgorin Circle Theorem Visualization2019-11-30T00:00:00-08:002019-11-30T00:00:00-08:00https://rflperry.github.io/posts/Gershgorin<h1 id="gershgorin-circle-theorem-visualization">Gershgorin Circle Theorem Visualization</h1>
<p>The Gershgorin circle theorem bounds the eigenvalues of a square matrix within Gershgorin discs.
Each disc is a circle centered at the <script type="math/tex">i</script>th diagonal element with radius equal to the sum of the absolute values
of the off-diagonal elements in the <script type="math/tex">i</script>th row. In the following visualization, the eigenvalues and discs of the matrix <script type="math/tex">A = (1-t)D + tN</script> are
shown as <script type="math/tex">t</script> varies from 1 to 0; the eigenvalues are continuous in <script type="math/tex">t</script>. Here, <script type="math/tex">D</script> is a diagonal matrix with entries equal to the
diagonal elements of <script type="math/tex">N</script>.</p>
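<p>The discs themselves are simple to compute. As a minimal sketch (assuming NumPy; this is not the code that generated the animation below), the following builds the discs of <script type="math/tex">A = (1-t)D + tN</script> for a random matrix and checks that every eigenvalue falls inside at least one disc:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
N = rng.normal(size=(4, 4))
D = np.diag(np.diag(N))  # diagonal part of N

def gershgorin_discs(A):
    """Return (center, radius) pairs: center A[i, i], radius equal to
    the sum of the absolute values of row i's off-diagonal entries."""
    radii = np.abs(A).sum(axis=1) - np.abs(np.diag(A))
    return list(zip(np.diag(A), radii))

for t in np.linspace(1, 0, 5):
    A = (1 - t) * D + t * N
    discs = gershgorin_discs(A)
    # Gershgorin: every eigenvalue lies in the union of the discs
    for ev in np.linalg.eigvals(A):
        assert any(abs(ev - c) <= r + 1e-9 for c, r in discs)
```

<p>At <script type="math/tex">t = 0</script> every radius shrinks to zero and the eigenvalues sit exactly on the diagonal entries.</p>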
<p><img src="/files/gershgorin.gif" alt="gif" /></p>Ronan Perryrperry27@jhu.eduGershgorin Circle Theorem VisualizationPredictive Analysis and Circuit Board Failures2019-06-20T00:00:00-07:002019-06-20T00:00:00-07:00https://rflperry.github.io/posts/PredictiveAnalysis<h1 id="predictive-analysis-and-circuit-board-failures">Predictive Analysis and Circuit Board Failures</h1>
<p>Recently a friend came to me with a question. He had been working on testing <em>expensive</em> circuit boards, and of the ten he tested, two didn’t work. He wanted to know the probability that future boards would work. This turns out to be a question of predictive analysis. Assuming the boards work or fail according to a binomial distribution (i.e. a coin toss), the maximum likelihood estimate of the probability of success is <script type="math/tex">\hat\theta = \frac{8}{10}</script>. So the probability of the next board working is simply <script type="math/tex">\frac{8}{10}</script>. However, asking questions about more than the next board is a bit trickier. Given our observations <script type="math/tex">\mathbf{Z}</script>, what is the probability of a set of new observations <script type="math/tex">z^{new}</script>?</p>
<p>In the simple maximum likelihood approach, we have estimated our parameter <script type="math/tex">\hat\theta = \frac{8}{10}</script> and so simply calculate the probability of new observations as <script type="math/tex">Pr(z^{new} \mid \mathbf{Z}) = Pr(z^{new} \mid \hat\theta)</script>. However, this fails to take into account our uncertainty of our estimate for <script type="math/tex">\theta</script>.</p>
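<p>As a quick sketch of the plug-in approach (the function name is mine, and the numbers follow the 8-of-10 example):</p>

```python
from math import comb

theta_hat = 8 / 10  # maximum likelihood estimate: 8 successes in 10 trials

def mle_predictive(s, f, theta=theta_hat):
    """Probability of s future successes and f future failures,
    plugging in the point estimate theta (ignores parameter uncertainty)."""
    return comb(s + f, s) * theta**s * (1 - theta)**f

print(mle_predictive(1, 0))  # next board works: 0.8
print(mle_predictive(2, 0))  # next two both work: ~0.64
```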
<p>In a Bayesian setting, however, we do take into account the uncertainty of our parameter. By Bayes’ theorem, we have the equation for the posterior distribution</p>
<script type="math/tex; mode=display">Pr(\theta \mid \mathbf{Z}) = \frac{Pr(\mathbf{Z} \mid \theta) Pr(\theta)}{Pr(\mathbf{Z})} = \frac{Pr(\mathbf{Z} \mid \theta) Pr(\theta)}{\int Pr(\mathbf{Z} \mid \theta^\prime) Pr(\theta^\prime) d\theta^\prime}</script>
<p>which tells us the likelihood of our parameter given our data. This is factored into our predictive distribution</p>
<p><script type="math/tex">Pr(z^{new} \mid \mathbf{Z}) = \int Pr(z^{new} \mid \theta) Pr(\theta \mid \mathbf{Z}) d\theta</script>.</p>
<p>First, since the distribution of our observations is binomial, the conditional likelihood of <em>s</em> successes and <em>f</em> failures is</p>
<script type="math/tex; mode=display">Pr (s,f \mid \theta) = {s + f \choose s} \theta^{s} (1-\theta)^{f}</script>
<p>However, <script type="math/tex">\theta</script> is only an estimate from the observations, so we treat it as uncertain and give it its own distribution. The Beta distribution is a commonly used prior: it is positive only on <script type="math/tex">[0,1]</script> and has a flexible shape governed by two shape parameters <script type="math/tex">\alpha</script> and <script type="math/tex">\beta</script>.</p>
<script type="math/tex; mode=display">Pr(\theta; \alpha, \beta) = \frac{1}{B(\alpha, \beta)} \theta^{\alpha-1}(1-\theta)^{\beta - 1} \quad\text{and}\quad B(\alpha, \beta) = \frac{\Gamma(\alpha) \Gamma(\beta)}{\Gamma(\alpha + \beta)}</script>
<p>where <script type="math/tex">\Gamma(x)</script> is the gamma function. Our best guesses for <script type="math/tex">\alpha</script> and <script type="math/tex">\beta</script> are based on our observations. Since the mean of the Beta distribution is <script type="math/tex">\mathbb{E}[\theta] = \frac{\alpha}{\alpha + \beta}</script> and our estimate is <script type="math/tex">\hat\theta = 0.8</script>, we set <script type="math/tex">\alpha = 8</script> and <script type="math/tex">\beta = 2</script>.</p>
<p>Recalling that we are interested in the predictive probability of <em>s</em> future successes and <em>f</em> failures, we plug these equations back into Bayes theorem and find our posterior distribution to be</p>
<script type="math/tex; mode=display">Pr(\theta \mid s,f) = \frac{1}{B(s+\alpha, f+\beta)}\theta^{s+\alpha-1}(1-\theta)^{f+\beta-1}</script>
<p>As it turns out, our posterior distribution is another Beta distribution, <script type="math/tex">Beta(s+\alpha, f+\beta)</script>. This is because the Beta distribution is the conjugate prior for the binomial likelihood function, among others. When the posterior and prior distributions belong to the same family of probability distributions, we call the prior conjugate.</p>
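<p>This conjugacy can be checked numerically. As a small sketch (assuming NumPy, with made-up new data of 3 successes and 1 failure), the following normalizes likelihood-times-prior on a grid and compares it against the closed-form <script type="math/tex">Beta(s+\alpha, f+\beta)</script> density:</p>

```python
import numpy as np
from math import comb, gamma

def beta_pdf(theta, a, b):
    """Beta(a, b) density, with B(a, b) written via gamma functions."""
    B = gamma(a) * gamma(b) / gamma(a + b)
    return theta**(a - 1) * (1 - theta)**(b - 1) / B

alpha, beta = 8, 2  # prior matching the observed 8/10 success rate
s, f = 3, 1         # hypothetical new observations

theta = np.linspace(0, 1, 2001)
# unnormalized posterior: binomial likelihood times Beta prior
unnorm = comb(s + f, s) * theta**s * (1 - theta)**f * beta_pdf(theta, alpha, beta)
# normalize by a trapezoidal estimate of the evidence Pr(Z)
Z = ((unnorm[:-1] + unnorm[1:]) / 2 * np.diff(theta)).sum()
posterior = unnorm / Z

# agrees with the conjugate update Beta(s + alpha, f + beta)
assert np.allclose(posterior, beta_pdf(theta, s + alpha, f + beta), rtol=1e-3)
```

<p>Because the update has a closed form, no numerical integration is actually needed in practice; the grid is only there to confirm the algebra.</p>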
<p>Factoring this into the predictive distribution, we find the probability of <em>s</em> future successes and <em>f</em> future failures to be</p>
<script type="math/tex; mode=display">Pr(s,f; \alpha, \beta) = {s + f \choose s} \frac{B(s + \alpha, f + \beta)}{B(\alpha, \beta)}</script>
<p>where <script type="math/tex">\alpha</script> and <script type="math/tex">\beta</script> come from the observations <script type="math/tex">\mathbf{Z}</script>. For a single success, the probability is <script type="math/tex">\mathbb{E} [\theta]</script> just like our maximum likelihood estimation. For more than one success or failure, the probabilities differ. By applying a Bayesian approach, we are able to factor in the uncertainty of our parameter’s estimate into our calculations.</p>Ronan Perryrperry27@jhu.eduPredictive Analysis and Circuit Board FailuresBlog Post number 12012-08-14T00:00:00-07:002012-08-14T00:00:00-07:00https://rflperry.github.io/posts/2012/08/cocktails<p>This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.</p>
<h1 id="headings-are-cool">Headings are cool</h1>
<h1 id="you-can-have-many-headings">You can have many headings</h1>
<h2 id="arent-headings-cool">Aren’t headings cool?</h2>Ronan Perryrperry27@jhu.eduThis is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.