Probabilistic Graphical Models 1: Introduction
Published: 2019-06-09


This course is offered by Prof. Daphne Koller of Stanford University, and the class webpage is hosted on Coursera.

Daphne Koller is a professor in the Department of Computer Science at Stanford University and a MacArthur Fellowship recipient. Her general research area is artificial intelligence and its applications in the biomedical sciences. In 2009 she published a textbook on Probabilistic Graphical Models together with Nir Friedman.

Prerequisites for this course: probability and statistics, machine learning, and Matlab or GNU Octave.

A probabilistic graphical model (PGM for short) is a probabilistic model in which a graph encodes the conditional independence structure between random variables. It is an advanced topic in machine learning. Since modern machine learning models are nearly all probabilistic and statistically learned, and graph structures are an efficient way to represent template models, this technique is widely applied. Applications of PGMs include medical diagnosis, fault diagnosis, natural language processing, traffic analysis, social network models, message decoding, computer vision (image segmentation, 3D reconstruction, holistic scene analysis), speech recognition, robot localization and mapping, and more.
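
To make the idea concrete, here is a minimal sketch in Python (the chain A → B → C and all of its numbers are invented for illustration, not taken from the course): the edges of a directed graph let the joint distribution break into local conditional distributions, which also encodes that C is independent of A given B.

    # Hypothetical binary chain A -> B -> C; all probabilities are illustrative.
    # The graph structure encodes P(A, B, C) = P(A) * P(B | A) * P(C | B).
    P_A = {0: 0.6, 1: 0.4}                      # P(A)
    P_B_given_A = {0: {0: 0.7, 1: 0.3},         # P(B | A = 0)
                   1: {0: 0.2, 1: 0.8}}         # P(B | A = 1)
    P_C_given_B = {0: {0: 0.9, 1: 0.1},         # P(C | B = 0)
                   1: {0: 0.5, 1: 0.5}}         # P(C | B = 1)

    def joint(a, b, c):
        # Joint probability read directly off the graph's factorization.
        return P_A[a] * P_B_given_A[a][b] * P_C_given_B[b][c]

    # Sanity check: the factored joint sums to 1 over all eight assignments.
    total = sum(joint(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1))
    print(total)  # ~1.0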


This course teaches fundamental methods in PGMs, as well as some real-world applications, such as medical diagnosis systems. Also, by completing the programming assignments in this course, we become able to use these methods in our own work; this is perhaps the most exciting, but at the same time the most time-consuming, part of the course. The course is organized into three parts: representation, inference, and learning. Representation introduces graph structures and some basic terms and properties. Inference is how we use a trained model to predict results and make decisions. Learning refers to the procedure by which we build the model and train its parameters, given training data.

Graphical models are divided into two categories: Bayesian networks, which are denoted by directed graphs, and Markov networks, which are denoted by undirected graphs. We shall see that these two categories differ substantially, and both of them have many applications.
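
One way to see the contrast (a sketch with invented potentials, not material from the course): a Bayesian network multiplies conditional probabilities, as in the chain example above, while a Markov network multiplies non-negative potentials over the undirected graph and then normalizes by a partition function Z.

    # Hypothetical pairwise Markov network over two binary variables A - B.
    # phi is a non-negative potential, not a probability; the values are made up.
    phi = {(0, 0): 30.0, (0, 1): 5.0,
           (1, 0): 1.0,  (1, 1): 10.0}

    # Partition function: the normalizing constant, summed over all assignments.
    Z = sum(phi[(a, b)] for a in (0, 1) for b in (0, 1))

    def prob(a, b):
        # P(A = a, B = b) = phi(a, b) / Z
        return phi[(a, b)] / Z

    print(prob(0, 0))  # 30 / 46, roughly 0.652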


A factor is a basic concept. Given a joint distribution, we may break the distribution up into smaller components, each over a smaller space of possibilities, and then define the overall joint distribution as a product of these components, or factors. We define the scope of a factor as the set of random variables it is defined over. Factor marginalization and factor reduction work essentially the same way as marginalization and reduction of a joint distribution.
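
A minimal sketch of a factor as a data structure, in Python (the representation and the helper names factor_product, marginalize, and reduce are my own for illustration, not the course's starter code), covering the scope and the operations just described:

    from itertools import product

    class Factor:
        # A factor: a table mapping assignments of its scope to non-negative values.
        def __init__(self, scope, table):
            self.scope = tuple(scope)   # ordered tuple of variable names
            self.table = dict(table)    # assignment tuple -> value

        def reduce(self, var, value):
            # Fix var to value and drop it from the scope.
            i = self.scope.index(var)
            new_scope = self.scope[:i] + self.scope[i + 1:]
            new_table = {k[:i] + k[i + 1:]: v
                         for k, v in self.table.items() if k[i] == value}
            return Factor(new_scope, new_table)

        def marginalize(self, var):
            # Sum var out of the factor.
            i = self.scope.index(var)
            new_scope = self.scope[:i] + self.scope[i + 1:]
            new_table = {}
            for k, v in self.table.items():
                key = k[:i] + k[i + 1:]
                new_table[key] = new_table.get(key, 0.0) + v
            return Factor(new_scope, new_table)

    def factor_product(f, g, domains):
        # Multiply two factors; domains maps each variable to its list of values.
        scope = f.scope + tuple(v for v in g.scope if v not in f.scope)
        table = {}
        for assignment in product(*(domains[v] for v in scope)):
            idx = dict(zip(scope, assignment))
            fv = f.table[tuple(idx[v] for v in f.scope)]
            gv = g.table[tuple(idx[v] for v in g.scope)]
            table[assignment] = fv * gv
        return Factor(scope, table)

    # Example: multiply phi1(A, B) by phi2(B), then sum out B to get a factor over A.
    domains = {"A": [0, 1], "B": [0, 1]}
    phi1 = Factor(("A", "B"), {(0, 0): 0.5, (0, 1): 0.8, (1, 0): 0.1, (1, 1): 0.3})
    phi2 = Factor(("B",), {(0,): 0.6, (1,): 0.4})
    joint = factor_product(phi1, phi2, domains)
    print(joint.marginalize("B").table)  # {(0,): 0.62, (1,): 0.18}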

Reposted from: https://www.cnblogs.com/JVKing/articles/2478304.html
