Keynote Lecture, Aug. 30, 2017 (Wed.) 9:00 AM, and Short Course, Aug. 30 - Sep. 1, by Prof. Bart ter Haar Romeny, Eindhoven University of Technology, the Netherlands

[[Keynote Lecture: Aug. 30 (Wed.) 9:00-10:30 AM]] Poster attached.

 Deep Learning with Convolutional Neural Networks

Prof. Bart ter Haar Romeny,

Eindhoven University of Technology, the Netherlands

 

30th Aug. 2017 (Wed.) 9:00-10:30 am

TR-212, Taiwan Tech

 

 

Abstract:

 

Deep learning is one of the fastest growing branches of machine learning, due to its spectacular performance on human cognitive tasks. Its main implementation is through convolutional neural networks (CNNs). A typical CNN has many layers (it is 'deep'). In 2012, the challenge to classify the images of the ImageNet database, with its 14 million images, was won with strikingly better performance than earlier methods. The deep structure of many convolutional layers is also recognized in human visual perception. CNNs find applications in scene recognition, self-driving cars, medical diagnosis, translation, etc. The technology is feasible because today we have abundant computing power and access to big data. It is embraced by the biggest companies (Apple, Google, Facebook, Baidu) and is rapidly transforming many areas of our technological society.

 

Biography:

Bart ter Haar Romeny (1952) is professor of biomedical image analysis. He has over 25 years of experience in biologically inspired computer vision research and computer-aided diagnosis applications. He received the MSc degree in applied physics from Delft University of Technology in 1978 and the PhD degree in biophysics from Utrecht University in 1983. He collaborates closely with industry and with national and international hospitals and research groups. He is currently project leader of the Sino-Dutch RetinaCheck project, a large screening project for early detection of diabetic retinopathy in Liaoning, China.

He is an enthusiastic educator. He authored an interactive tutorial book on multi-scale computer vision techniques, written in Mathematica, edited a book on non-linear diffusion theory in computer vision, and has initiated or is involved in a number of international collaborations on these subjects. He is author or co-author of over 200 refereed journal and conference papers and 12 books and book chapters, and holds 2 patents. He has supervised 29 PhD students, of whom 4 graduated cum laude, and over 140 Master students. He is a senior member of IEEE, an associate member of the Chinese Brainnetome consortium, a visiting professor at the Chinese Academy of Sciences in Beijing, a member of the Governing Board of IAPR, a Fellow of EAMBES, and chairman of the Dutch Society for Pattern Recognition and Image Processing.

~ All are welcome to attend ~

 

---------------------------------------------------------------------------------------------------------

[[Short Course: Aug. 30 - Sep. 1]]

 

Course Code & Name: ET5924701 Special Topics on Electronic Engineering (4)

Title: Deep Learning with Convolutional Neural Networks

Instructor: Prof. Bart ter Haar Romeny, PhD
Eindhoven University of Technology, the Netherlands & Northeastern University, Shenyang, China

https://nl.linkedin.com/in/bartterhaarromeny

Class dates & Class hours & Class Room:

Aug. 30, 2017 (Wed.) to Sep. 1, 2017 (Fri.), 09:10-12:10 & 13:20-17:00 (periods 2, 3, 4 & 6, 7, 8, 9), every day

Duration: 18 hours

Credit: 1

Course language: English

Class Room: TR-212

Description

Deep learning is one of the fastest growing branches of machine learning, due to its spectacular performance on human cognitive tasks. The neural network has many layers ('deep'), and its main implementation is through convolutional neural networks (CNNs). In 2012, the challenge to classify the images of the ImageNet database, with its 14 million images, was won with strikingly better performance than earlier methods. The deep structure of many convolutional layers is also recognized in human visual perception. CNNs find applications in scene recognition, self-driving cars, medical diagnosis, translation, etc. The technology is feasible because today we have abundant computing power and access to big data. It is embraced by the biggest companies (Apple, Google, Facebook, Baidu) and is rapidly transforming many areas of our technological society.

This course gives a step-by-step introduction to deep neural networks. We will discuss the terminology of the main concepts, study the famous papers by the inventors, and implement our first steps in CNNs on instructive toy databases, such as MNIST for handwritten digit recognition. As this field is also known as 'brain-inspired computing', attention is also paid to models of human visual perception. Many real-world examples will be discussed and explained. We will treat a number of well-established mathematical modeling techniques in detail, in particular multi-scale and multi-orientation differential geometry, models for self-organization and plasticity, and geometric neural feedback, leading to effective adaptive operations. The theory is presented in an axiomatic, intuitive way, with emphasis on fundamental understanding.
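For a flavor of those first steps, here is a minimal sketch of a first CNN for MNIST handwritten digit recognition. It assumes TensorFlow/Keras, one of the frameworks listed in the course outline below; the course labs themselves use Mathematica 11 and Matlab, so this is illustrative only, not official course material.

# Minimal sketch, assuming TensorFlow/Keras (not official course material).
import tensorflow as tf

# MNIST: 60,000 training and 10,000 test images of handwritten digits (28x28, grayscale).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0   # add a channel axis, scale pixels to [0, 1]
x_test = x_test[..., None] / 255.0

# A small 'deep' network: two convolutional layers with ReLU activations,
# max pooling, and a fully connected softmax classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Training = error backpropagation with (stochastic) gradient descent.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))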

Lectures are given in the morning; in the afternoon, all concepts are practiced in a computer lab.

The course is concluded with a written exam.

 

This is a short intensive course of three full days, where each morning of lectures is followed by a computer lab in the afternoon, to bring the concepts to life (all software code is supplied). We exploit the high-level 'play and design' functionality of Mathematica 11 and Matlab. Students will work in small groups on the assignments.

 

Credit points

Students are required to present their project solutions during the class.

 

Target audience

Students interested in deep learning, brain-inspired computing, and modern digital image processing algorithms.

 

Required skills

The course is designed to be self-contained. Basic mathematical skills, i.e. matrix/vector computations and some differential geometry, will be helpful.

 

Literature

[1]    Michael Nielsen, Neural Networks and Deep Learning. Free online book: http://neuralnetworksanddeeplearning.com/, Jan. 2017.

[2]    Yann LeCun, Yoshua Bengio and Geoffrey Hinton, Deep learning. Nature 521, 436–444 (28 May 2015).

[3]    David Hubel, Eye, Brain and Vision. Free online book: http://hubel.med.harvard.edu/, 1988.

 

Outline

Wednesday:

- History of neural networks, perceptrons
- Machine learning and pattern recognition
  - Supervised and unsupervised learning
  - Feature detection, famous features
  - Cluster analysis in feature space, classifiers
- Deep Learning, Big Data and Graphical Processing Units (GPUs)
  - Challenges in Deep Learning, famous databases and challenges
- Classification, detection and semantic segmentation
- The concept of convolution (see the sketch below this outline)
- Deep Convolutional Neural Networks

Thursday:

- Principal component analysis
- Convolutional layer
- Max and mean pooling layer
- Rectifying linear units
- Fully connected layer
- Error backpropagation and gradient descent learning
- Implementations: Caffe, TensorFlow, Torch, Mathematica, Matlab
- Learn to recognize handwritten digits with the MNIST dataset

Friday:

- Regularization
- Data augmentation
- Recurrent neural networks
- Network visualizations
- The cascade of neural network layers in the visual system
- A clinical application: screening retinal damage from diabetes by deep learning

The course exam is a two-hour written exam with open questions (in English).
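As a small illustration of the concept of convolution from the Wednesday outline, the sketch below slides a kernel over an image and, at each position, takes the weighted sum of the covered pixels as one output value. It uses only NumPy; the edge kernel and random image are hypothetical stand-ins, not course data. ReLU and max pooling from the Thursday outline are included for completeness.

# A from-scratch sketch of 2D convolution (NumPy only; illustrative, not course code).
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2D convolution of a single-channel image with a small kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    flipped = kernel[::-1, ::-1]               # true convolution flips the kernel
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * flipped)
    return out

# A hand-crafted vertical-edge kernel; in a CNN such weights are not designed
# by hand but learned from data by gradient descent.
edge_kernel = np.array([[1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0]])
image = np.random.rand(8, 8)                   # stand-in for a real image patch
feature_map = conv2d(image, edge_kernel)       # shape (6, 6)

# Rectifying linear unit (ReLU) and 2x2 max pooling applied to the feature map.
relu = np.maximum(feature_map, 0.0)
pooled = relu.reshape(3, 2, 3, 2).max(axis=(1, 3))   # shape (3, 3)
print(feature_map.shape, relu.shape, pooled.shape)

Stacking such learned convolutions, nonlinearities, and pooling layers is exactly what gives a CNN its 'deep' structure.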

 

 
