AI Resources Library: Issue 65 (20170805)
Views: 2,440
Published: 2019-05-10

This article is about 4,343 characters long and takes roughly 14 minutes to read.

Author: chen_h

WeChat & QQ: 862251340
WeChat public account: coderpai


1.【Quora】Rank the most important factors during a PhD, which will increase your probability of finding a good faculty position?

Summary:

A2A: It’s hard to make a simple rank-ordered list here because these factors combine in complicated, non-linear ways. So let me just describe what we end up discussing in our faculty-hiring process in various departments in CMU SCS. Perhaps that will give you some idea of what you should work on.

Original link:


2.【Blog】Faces recreated from monkey brain signals

Summary:

The brains of primates can resolve different faces with remarkable speed and reliability, but the underlying mechanisms are not fully understood.

The researchers showed pictures of human faces to macaques and then recorded patterns of brain activity.

The work could inspire new facial recognition algorithms, they report.

In earlier investigations, Professor Doris Tsao from the California Institute of Technology (Caltech) and colleagues had used functional magnetic resonance imaging (fMRI) in humans and other primates to work out which areas of the brain were responsible for identifying faces.

Six areas were found to be involved, all of which are located in part of the brain known as the inferior temporal (IT) cortex. The researchers described these six areas as “face patches”.

Original link:


3.【Paper】A Fast Unified Model for Parsing and Sentence Understanding

Summary:

Tree-structured neural networks exploit valuable syntactic parse information as they interpret the meanings of sentences. However, they suffer from two key technical problems that make them slow and unwieldy for large-scale NLP tasks: they usually operate on parsed sentences and they do not directly support batched computation. We address these issues by introducing the Stack-augmented Parser-Interpreter Neural Network (SPINN), which combines parsing and interpretation within a single tree-sequence hybrid model by integrating tree-structured sentence interpretation into the linear sequential structure of a shift-reduce parser. Our model supports batched computation for a speedup of up to 25× over other tree-structured models, and its integrated parser can operate on unparsed data with little loss in accuracy. We evaluate it on the Stanford NLI entailment task and show that it significantly outperforms other sentence-encoding models.
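The shift-reduce control flow the abstract refers to can be sketched in a few lines. This is a toy simplification, not the paper's neural model: SPINN replaces the `compose` function below with a learned tree-composition (TreeLSTM) cell, and the token/transition names here are illustrative.

```python
# Toy shift-reduce interpreter: SHIFT pushes the next word onto a stack;
# REDUCE pops the top two stack entries and composes them into one node.
# After a valid transition sequence, one entry remains: the parse root.

def shift_reduce(tokens, transitions, compose):
    stack, buffer = [], list(tokens)
    for op in transitions:
        if op == "SHIFT":
            stack.append(buffer.pop(0))
        else:                              # "REDUCE"
            right, left = stack.pop(), stack.pop()
            stack.append(compose(left, right))
    return stack[0]                        # the single remaining entry

tokens = ["the", "cat", "sat"]
# Transition sequence encoding the parse ((the cat) sat)
ops = ["SHIFT", "SHIFT", "REDUCE", "SHIFT", "REDUCE"]
tree = shift_reduce(tokens, ops, compose=lambda l, r: (l, r))
print(tree)  # (('the', 'cat'), 'sat')
```

Because the transition sequence linearizes the tree, every sentence becomes a flat list of SHIFT/REDUCE steps, which is what makes batching across differently shaped trees possible.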

Original link:


4.【Blog】Hierarchical Softmax

Summary:

Hierarchical softmax is an alternative to the softmax in which the probability of any one outcome depends on a number of model parameters that is only logarithmic in the total number of outcomes. In “vanilla” softmax, on the other hand, the number of such parameters is linear in the total number of outcomes. In a case where there are many outcomes (e.g. in language modelling) this can be a huge difference. The consequence is that models using hierarchical softmax are significantly faster to train with stochastic gradient descent, since only the parameters on which the current training example depends need to be updated, and fewer updates means we can move on to the next training example sooner. At evaluation time, hierarchical softmax models allow faster calculation of individual outcomes, again because they depend on fewer parameters (and because the calculation using the parameters is just as straightforward as in the softmax case). So hierarchical softmax is very interesting from a computational point of view. By explaining it here, I hope to convince you that it is also interesting conceptually. To keep things concrete, I’ll illustrate using the CBOW learning task from word2vec (and fastText, and others).
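A minimal sketch of the idea, assuming a complete binary tree over a toy vocabulary of 8 outcomes (the tree layout, random vectors, and function names here are illustrative, not word2vec's actual Huffman tree):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
vocab_size, dim = 8, 4
depth = int(np.log2(vocab_size))                       # 3 decisions per outcome
node_vectors = rng.normal(size=(vocab_size - 1, dim))  # one vector per internal node

def hs_probability(context, word_idx):
    """P(word | context) as a product of binary decisions on the root-to-leaf path."""
    prob, node = 1.0, 0                          # start at the root (node 0)
    for bit in format(word_idx, f"0{depth}b"):   # path bits: 0 = go left, 1 = go right
        p_right = sigmoid(node_vectors[node] @ context)
        prob *= p_right if bit == "1" else (1.0 - p_right)
        node = 2 * node + (2 if bit == "1" else 1)  # heap-style child indexing
    return prob

context = rng.normal(size=dim)
probs = [hs_probability(context, w) for w in range(vocab_size)]
print(round(sum(probs), 6))  # 1.0 -- leaf probabilities sum to 1 by construction
```

Each prediction touches only `depth` = log2(|V|) node vectors instead of all |V| output vectors, which is exactly the logarithmic-versus-linear parameter count the post describes.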

Original link:


5.【Blog】How to Visualize Your Recurrent Neural Network with Attention in Keras

Summary:

Neural networks are taking over every part of our lives. In particular — thanks to deep learning — Siri can fetch you a taxi using your voice; and Google can enhance and organize your photos automagically. Here at , we use deep learning to structurally and semantically understand data, allowing us to prepare it for use automatically.

Neural networks are massively successful in the domain of computer vision. Specifically, convolutional neural networks (CNNs) take images and extract relevant features from them by using small windows that travel over the image. This understanding can be leveraged to identify objects from your camera and, in the future, even drive your car.
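The quantity such a visualization actually plots is the matrix of attention weights: for each output step, a softmax over encoder timesteps that says which input positions the model attended to. A minimal sketch with plain dot-product attention (an assumed simplification; the blog's Keras layer may compute its scores differently):

```python
import numpy as np

# One row of an attention heatmap: score each encoder timestep against the
# decoder's query state, then softmax so the weights form a distribution.

def attention_weights(decoder_state, encoder_outputs):
    scores = encoder_outputs @ decoder_state   # one score per input timestep
    scores -= scores.max()                     # shift for numerical stability
    weights = np.exp(scores)
    return weights / weights.sum()             # softmax over timesteps

rng = np.random.default_rng(0)
enc = rng.normal(size=(6, 8))   # 6 input timesteps, hidden size 8
dec = rng.normal(size=8)        # one decoder query state
w = attention_weights(dec, enc)
print(w.shape, round(w.sum(), 6))  # (6,) 1.0
```

Stacking one such row per output step gives the 2-D weight matrix that attention-visualization plots render as a heatmap over input positions.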

Original link:


