NEWS

Senior Projects Poster Session
CMPE Senior Project Poster Session was held on Thursday, May 31, 2018. Read more...
Data for Refugees
Türk Telekom, TÜBİTAK and Boğaziçi University initiated the "D4R – Data for Refugees" project. Read more...
EU Funding for Full-time MSc/PhD Positions in Cognitive Robotics and Robot Learning
Project name: IMAGINE: Robots Understanding Their Actions by Imagining Their Effects. Read more...
Special 6-week training course organized with Havelsan: "Introduction to Machine Learning and Data Analysis"

CmpE Events

Friday, February 22nd

  1. TETAM PhD Seminars
    • Start time: 10:00am, Friday, February 22nd
    • End time: 11:30am, Friday, February 22nd
    • Where: AVS Conference Room, BM
    • Title: Exploring Temporal Accumulative Features for Sign Language Recognition
      Speaker: Ahmet Alp Kındıroğlu

      Title: The Determination of Optimum Radiation Treatment Plans
      Speaker: Pınar Dursun

      http://tetam.boun.edu.tr/sites/default/files/2019-02-22_TETAM_PhD_Seminars_vol-1.jpg

      http://tetam.boun.edu.tr/sites/default/files/tetam_doktora_seminerleri_hafta_2019_spring.jpg

    • View this event in Google Calendar

Tuesday, February 26th

  1. CmpE Seminar: General Overview of Deep Learning and Its Applications in Self-Driving Cars and Speech Processing by Cevahir Parlak
    • Start time: 12:00pm, Tuesday, February 26th
    • End time: 01:00pm, Tuesday, February 26th
    • Where: AVS Conference Room, BM
    • Abstract:
      Major Architectures of Deep Networks
      There are three major architectures of deep networks, and smaller networks are used as building blocks for each of them:
      Unsupervised Pretrained Networks (UPNs)
      o Autoencoders and Variational Autoencoders (VAEs)
      o Deep Belief Networks (DBNs)
      o Generative Adversarial Networks (GANs)

      Convolutional Neural Networks (CNNs)
      The goal of a CNN is to learn higher-order features in the data via convolutions. CNNs are well suited to object recognition in images and consistently top image classification competitions; they can identify faces, individuals, street signs, and many other aspects of visual data. CNNs overlap with text analysis via optical character recognition, but they are also useful when analyzing words as discrete textual units, and they are good at analyzing sound as well. (A minimal code sketch of a small CNN follows this event listing.)

      Recurrent Neural Networks (RNNs)
      Recurrent Neural Networks are related to feed-forward networks but differ in their ability to send information across time-steps. Historically, these networks have been difficult to train, but advances in optimization, network architectures, parallelism, and graphics processing units (GPUs) have made them more approachable for practitioners. LSTM networks have become a standard way to handle time-series data such as speech.

      Androcars (robocars, self-driving cars, autonomous cars) have made substantial advances in recent decades, in parallel with developments in computer vision and machine learning; progress has been particularly striking in the last twenty years. Today, Androcars are carrying out test drives on public roads.

      In the last few years, advances in deep learning have made it possible to perform speech recognition without human-engineered features, with great success: classification without hand-crafted features has produced very good results, sometimes better than those obtained with MFCC or mel filter bank features. Both LSTMs and CNNs can be used for speech recognition applications. (A minimal LSTM sequence-classifier sketch also follows this event listing.)

    • View this event in Google Calendar
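
As a rough companion to the CNN overview in the abstract above, the following is a minimal sketch of a small convolutional image classifier in PyTorch. It is illustrative only and not part of the seminar material; the input size (32x32 RGB), channel counts, and the 10 output classes are assumptions.

# Minimal CNN sketch (illustrative; sizes and class count are assumed).
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Tiny convolutional classifier for 32x32 RGB images with 10 classes (assumed)."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level filters learned by convolution
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # stacked convolutions give higher-order features
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# Shape check with a batch of 4 random "images".
logits = SmallCNN()(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])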
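
In the same spirit, the sketch below illustrates the abstract's point about LSTMs for speech: an LSTM reads a sequence of per-frame feature vectors (for example, mel filter bank frames) and classifies the utterance. The feature dimension, hidden size, number of classes, and example shapes are assumptions made for illustration.

# Minimal LSTM sequence-classifier sketch (illustrative; dimensions are assumed).
import torch
import torch.nn as nn

class SpeechLSTM(nn.Module):
    """Classify a sequence of per-frame feature vectors (e.g. mel filter bank frames)."""
    def __init__(self, feat_dim: int = 40, hidden: int = 128, num_classes: int = 10):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)  # carries information across time-steps
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, feat_dim)
        _, (h_n, _) = self.lstm(frames)   # h_n: (1, batch, hidden), final hidden state
        return self.head(h_n[-1])         # classify from the last time-step summary

# Shape check: 2 utterances, 100 frames each, 40 features per frame.
logits = SpeechLSTM()(torch.randn(2, 100, 40))
print(logits.shape)  # torch.Size([2, 10])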

Contact us

Department of Computer Engineering, Boğaziçi University,
34342 Bebek, Istanbul, Turkey

  • Phone: +90 212 359 45 23/24
  • Fax: +90 212 287 24 61
