
Diagnosing Heart Diseases with Deep Neural Networks

The Second National Data Science Bowl, a data science competition where the goal was to automatically determine cardiac volumes from MRI scans, has just ended. We participated with a team of 4 members from Ghent University and finished 2nd!

The team kunsthart ("artificial heart" in English) consisted of three PhD students, Ira Korshunova, Jeroen Burms, and Jonas Degrave, and professor Joni Dambre. It is also a follow-up of last year's team ≋ Deep Sea ≋, which finished in first place in the First National Data Science Bowl.


This blog post is going to be long; here is an overview of the different sections.

  • The problem
  • The data
  • The evaluation
  • The solution: traditional image processing, convnets, and dealing with outliers
  • Pre-processing and data augmentation
  • Network architectures
  • Training and ensembling
  • Second round submissions
  • Software and hardware


The problem

The goal of this year's Data Science Bowl was to estimate the minimum (end-systolic) and maximum (end-diastolic) volumes of the left ventricle from a set of MRI images taken over one heartbeat. Practitioners use these volumes to compute the ejection fraction: the fraction of outbound blood pumped from the heart with each heartbeat. This measurement can predict a wide range of cardiac problems. For a skilled cardiologist, analysis of the MRI scans can take up to 20 minutes; automating this process is therefore obviously useful.
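As a quick illustration (not part of the competition pipeline), the ejection fraction follows directly from the two volumes:

```python
# Ejection fraction from end-systolic (ESV) and end-diastolic (EDV) volumes,
# both in millilitres; the numbers below are illustrative values only.
def ejection_fraction(esv_ml, edv_ml):
    return (edv_ml - esv_ml) / edv_ml

print(ejection_fraction(esv_ml=60.0, edv_ml=160.0))  # -> 0.625
```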

Unlike the previous Data Science Bowl, which had a very clean and voluminous dataset, this year's competition required a lot more focus on dealing with inconsistencies in the way the very limited number of data points were gathered. As a result, most of our efforts went into trying out different ways to preprocess and combine the different data sources.

The data

The dataset consisted of over a thousand patients. For each patient, we were given a number of 30-frame MRI videos in the DICOM format, showing the heart during a single cardiac cycle (i.e. a single heartbeat). These videos were taken in different planes, including multiple short-axis views (SAX), a 2-chamber view (2Ch), and a 4-chamber view (4Ch). The SAX views, whose planes are perpendicular to the long axis of the left ventricle, form a series of slices that (ideally) cover the entire heart. The number of SAX slices ranged from 1 to 23. Typically, the region of interest (ROI) is only a small part of the entire image. Below you can find a few SAX slices and the 2Ch and 4Ch views from one of the patients. The red circles on the SAX images indicate the ROI's center (later we will explain how to find it); on the 2Ch and 4Ch views they mark the locations of the SAX slices projected onto the corresponding view.

[Figure: SAX slices sax_5, sax_9, sax_10, sax_11, sax_12 and sax_15, together with the 2Ch and 4Ch views, for one patient]

The DICOM files also contained a bunch of metadata. Some of the metadata fields, like PixelSpacing and ImageOrientation, were absolutely invaluable to us. The metadata also specified the patient's age and sex.

For each patient in the train set, two labels were provided: the systolic volume and the diastolic volume. From what we gathered (link), these were obtained by cardiologists by manually performing a segmentation on the SAX slices, and feeding these segmentations to a program that computes the minimal and maximal heart chamber volumes. The cardiologists didn’t use the 2Ch or 4Ch images to estimate the volumes, but for us they proved to be very useful.

Combining these multiple data sources can be difficult; for us, however, dealing with inconsistencies in the data was even more challenging. Some examples: the 4Ch slice not being provided for some patients, one patient with fewer than 30 frames per MRI video, a couple of patients with only a handful of SAX slices, and patients with SAX slices taken in odd locations and orientations.

The evaluation

Given a patient’s data, we were asked to output a cumulative distribution function over the volume, ranging from 0 to 599 mL, for both systole and diastole. The models were scored by a Continuous Ranked Probability Score (CRPS) error metric, which computes the average squared distance between the predicted CDF and a Heaviside step function representing the real volume.
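A minimal sketch of the scoring, assuming the straightforward reading of the metric (600 volume thresholds, squared differences between the two CDFs):

```python
import numpy as np

# CRPS as used in the competition: mean squared difference between the
# predicted CDF over 0..599 mL and the Heaviside step at the true volume.
def crps(predicted_cdf, true_volume_ml):
    volumes = np.arange(600)
    heaviside = (volumes >= true_volume_ml).astype(float)
    return np.mean((np.asarray(predicted_cdf) - heaviside) ** 2)

perfect = (np.arange(600) >= 150).astype(float)
print(crps(perfect, 150))            # a perfect step prediction scores 0.0
print(crps(np.full(600, 0.5), 150))  # a flat 0.5 "don't know" scores 0.25
```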

An additional interesting novelty of this competition was its two-stage process. In the first stage, we were given a training set of 500 patients and a public test set of 200 patients. In the final week, we were required to submit our model, after which the organizers released the test data of 440 patients and the labels for the 200 patients from the public test set. We think the goal was to compensate for the small dataset and to prevent people from optimizing against the test set through visual inspection of every part of their algorithm. Hand-labeling was allowed on the training dataset only in the first stage; in the second stage, it was also allowed for the 200 validation patients.

The solution: traditional image processing, convnets, and dealing with outliers

In our solution, we combined traditional image processing approaches, which find the region of interest (ROI) in each slice, with convolutional neural networks, which perform the mapping from the extracted image patches to the predicted volumes. Given the very limited number of training samples, we tried to combat overfitting by restricting our models to combining the different data sources in predefined ways, as opposed to having them learn how to do the aggregation. Unlike many other contestants, we performed no hand-labelling.

Pre-processing and data augmentation

The provided images have varying sizes and resolutions, and show not only the heart but the entire torso of the patient. Our preprocessing pipeline made the images ready to be fed to a convolutional network through the following steps:

  • applying a zoom factor such that all images have the same resolution in millimeters
  • finding the region of interest and extracting a patch centered around it
  • data augmentation
  • contrast normalization

To find the correct zoom factor, we made use of the PixelSpacing metadata field, which specifies the physical resolution of the image. Below, we explain our approach to ROI detection and data augmentation.
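A minimal sketch of the resolution-normalisation step; nearest-neighbour resampling keeps the example dependency-free (the actual pipeline used proper interpolation, and all function names here are our own):

```python
import numpy as np

# Resolution normalisation: rescale so that one pixel corresponds to a fixed
# physical size, using the DICOM PixelSpacing field (mm per pixel).
def normalize_resolution(image, pixel_spacing_mm, target_spacing_mm=1.0):
    zoom = np.asarray(pixel_spacing_mm, dtype=float) / target_spacing_mm
    new_shape = np.round(np.array(image.shape) * zoom).astype(int)
    # Nearest-neighbour index maps for rows and columns
    rows = np.minimum((np.arange(new_shape[0]) / zoom[0]).astype(int), image.shape[0] - 1)
    cols = np.minimum((np.arange(new_shape[1]) / zoom[1]).astype(int), image.shape[1] - 1)
    return image[np.ix_(rows, cols)]

img = np.arange(16.0).reshape(4, 4)
resampled = normalize_resolution(img, pixel_spacing_mm=(2.0, 2.0))  # 2 mm -> 1 mm pixels
print(resampled.shape)  # -> (8, 8)
```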

Detecting the Region Of Interest through image segmentation techniques

We used classical computer vision techniques to find the left ventricle in the SAX slices. For each patient, the center and width of the ROI were determined by combining the information of all the SAX slices provided. The figure below shows an example of the result.

ROI extraction steps

First, as suggested in the Fourier-based tutorial, we exploit the fact that each slice sequence captures one heartbeat, and use Fourier analysis to extract an image that captures the maximal activity at the corresponding heartbeat frequency (same figure, second image).
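The idea can be sketched as follows: with 30 frames covering exactly one heartbeat, the first temporal Fourier harmonic corresponds to the heartbeat frequency (a simplified illustration, not our exact code):

```python
import numpy as np

# The magnitude of the first temporal Fourier harmonic highlights pixels
# whose intensity varies once per sequence, i.e. at the heartbeat frequency.
def fourier_activity_image(frames):
    """frames: array of shape (time, height, width)."""
    spectrum = np.fft.fft(frames, axis=0)
    return np.abs(spectrum[1])   # first harmonic = one cycle per sequence

t = np.linspace(0, 2 * np.pi, 30, endpoint=False)
frames = np.zeros((30, 8, 8))
frames[:, 4, 4] = np.sin(t)      # one pixel pulsing once per heartbeat
activity = fourier_activity_image(frames)
cy, cx = np.unravel_index(activity.argmax(), activity.shape)
print(int(cy), int(cx))  # -> 4 4
```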

From these Fourier images, we then extracted the center of the left ventricle by combining the Hough circle transform with a custom kernel-based majority voting approach across all SAX slices. First, for each Fourier image (resulting from a single SAX slice), the highest scoring Hough circles for a range of radii were found, and from all of those, the highest scoring ones were retained. The number of circles retained and the range of radii are metaparameters that severely affect the robustness of the detected ROI, and were optimised manually. The third image in the figure shows an example of the best circles for one slice.

Finally, a ‘likelihood surface’ (rightmost image in figure above) was obtained by combining the centers and scores of the selected circles for all slices. Each circle center was used as the center for a Gaussian kernel, which was scaled with the circle score, and all these kernels were added. The maximum across this surface was selected as the center of the ROI. The width and height of the bounding box of all circles with centers within a maximal distance (another hyperparameter) of the ROI center were used as bounds for the ROI or to create an ellipsoidal mask as shown in the figure.
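A simplified sketch of this voting scheme (the kernel width and all names here are illustrative; our actual implementation also tracked circle radii to derive the bounding box):

```python
import numpy as np

# Kernel-based voting: each circle centre votes with a Gaussian kernel scaled
# by its Hough score; the argmax of the summed surface is the ROI centre.
# The kernel width sigma is one of the hyperparameters mentioned above.
def roi_center(centers, scores, shape, sigma=5.0):
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    surface = np.zeros(shape)
    for (cy, cx), score in zip(centers, scores):
        surface += score * np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    iy, ix = np.unravel_index(surface.argmax(), shape)
    return int(iy), int(ix)

centers = [(20, 20), (22, 21), (40, 10)]   # two agreeing slices and one outlier
scores = [1.0, 1.0, 0.5]
print(roi_center(centers, scores, (64, 64)))  # close to (21, 20), ignoring the outlier
```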

Given these ROIs in the SAX slices, we were able to find the ROIs in the 2Ch and 4Ch slices by projecting the SAX ROI centers onto the 2Ch and 4Ch planes.

Data augmentation

As always when using convnets on a problem with few training examples, we used tons of data augmentation. Some special precautions were needed, since we had to preserve the surface area of the heart's cross-sections. In terms of affine transformations, this means that only skewing, rotation and translation were allowed. We also added zooming, but then we had to correct our volume labels accordingly! This helped to make the distribution of labels more diverse.

Another augmentation came in the form of shifting the images over the time axis. While systole was often found at the beginning of a sequence, this was not always the case. Rolling the image tensor over the time axis made the resulting model more robust against this source of noise in the dataset, while providing even more augmentation of our data.

Data augmentation was applied during the training phase to increase the number of training examples. We also applied the augmentations during the testing phase, and averaged predictions across the augmented versions of the same data sample.
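Two of these augmentations are easy to sketch. The zoom correction factor below is the one implied by the area-preservation argument above (an in-plane zoom by z scales areas, and hence derived volumes, by z²); the actual image resampling is omitted:

```python
import numpy as np

# Time-axis augmentation: cyclically shift the 30-frame sequence so that
# systole is not always near the start.
def roll_time(frames, shift):
    """frames: array of shape (time, height, width)."""
    return np.roll(frames, shift=shift, axis=0)

# Label correction for zooming: an in-plane zoom by factor z scales every
# cross-sectional area by z**2, and hence the volume computed from those
# areas (with unchanged slice distances) by z**2 as well.
def zoom_corrected_label(volume_ml, zoom_factor):
    return volume_ml * zoom_factor ** 2

frames = np.arange(30, dtype=float)[:, None, None] * np.ones((30, 4, 4))
rolled = roll_time(frames, 5)
print(rolled[5, 0, 0])   # frame 0 ends up at index 5 -> 0.0
print(zoom_corrected_label(100.0, 1.1))
```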

Network architectures

We used convolutional neural networks to learn a mapping from the extracted image patches to systolic and diastolic volumes. During the competition, we played around a lot with both minor and major architectural changes. Our base architecture for most of our models was based on VGG-16.

As we already mentioned, we trained different models which can deal with different kinds of patients. There are roughly four different kinds of models we trained: single slice models, patient models, 2Ch models and 4Ch models.

Single slice models

Single slice models are models that take a single SAX slice as an input, and try to predict the systolic and diastolic volumes directly from it. The 30 frames were fed to the network as 30 different input channels. The systolic and diastolic networks shared the convolutional layers, but the dense layers were separated. The output of the network could be either a 600-way softmax (followed by a cumulative sum), or the mean and standard deviation of a Gaussian (followed by a layer computing the cdf of the Gaussian).
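The Gaussian variant of the output head can be sketched as follows (a deterministic layer evaluating the Gaussian CDF at the 600 integer volumes; this is an illustration in NumPy, not our Theano code):

```python
import math
import numpy as np

# Gaussian output head: the network predicts mu and sigma of the volume; a
# final deterministic layer evaluates the Gaussian CDF at volumes 0..599 mL.
def gaussian_cdf_output(mu, sigma):
    volumes = np.arange(600)
    return np.array([0.5 * (1.0 + math.erf((v - mu) / (sigma * math.sqrt(2.0))))
                     for v in volumes])

cdf = gaussian_cdf_output(mu=150.0, sigma=10.0)
print(cdf[150])  # CDF evaluated at the mean -> 0.5
```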

Although these models obviously have too little information to make a decent volume estimation, they benefitted hugely from test-time augmentation (TTA). During TTA, the model gets slices with different augmentations, and the outputs are averaged across augmentations and slices for each patient. Although this way of aggregating over SAX slices is suboptimal, it proved to be very robust to the relative positioning of the SAX slices, and is as such applicable to all patients.

Our single best single slice model achieved a local validation score of 0.0157 (after TTA), which was a reliable estimate of the public leaderboard score for these models. The approximate architecture of the slice models is shown in the following figure.

2Ch and 4Ch models

These models have a much more global view of the left ventricle than single-SAX-slice models. The 2Ch models also have the advantage of being applicable to every patient, whereas not every patient had a 4Ch slice. We used the same VGG-inspired architecture for these models. Individually, they achieved a validation score (0.0156) similar to that achieved by averaging over multiple SAX slices. By ensembling only single slice, 2Ch and 4Ch models, we were able to achieve a score of 0.0131 on the public leaderboard.

Patient models

As opposed to single slice models, patient models try to make predictions based on the entire stack of (up to 25) SAX slices. In our first approaches to these models, we tried to process each slice separately using a VGG-like single slice network, followed by feeding the results to an overarching RNN in an ordered fashion. However, these models tended to overfit badly. Our solution to this problem consists of a clever way to merge predictions from multiple slices. Instead of having the network learn how to compute the volume based on the results of the individual slices, we designed a layer which combines the areas of consecutive cross-sections of the heart using a truncated cone approximation.

Basically, the slice models have to estimate the area A_i (and the standard deviation thereof) of the cross-section of the heart in a given slice i. For each pair of consecutive slices i and i+1, we estimate the volume of the heart between them as V_i = (d_i / 3) · (A_i + A_{i+1} + √(A_i · A_{i+1})), where d_i is the distance between the slices. The total volume is then given by V = Σ_i V_i.
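The truncated-cone combination can be sketched as follows (illustrative NumPy rather than the actual Theano layer):

```python
import numpy as np

# Frustum (truncated cone) combination: given per-slice cross-sectional areas
# and the distances between consecutive slices, each pair of slices contributes
# V_i = d_i * (A_i + A_{i+1} + sqrt(A_i * A_{i+1})) / 3.
def frustum_volume(areas_mm2, distances_mm):
    a, b = np.asarray(areas_mm2[:-1]), np.asarray(areas_mm2[1:])
    volumes = np.asarray(distances_mm) * (a + b + np.sqrt(a * b)) / 3.0
    return volumes.sum() / 1000.0   # mm^3 -> mL

# Sanity check: equal areas reduce to a cylinder (area x total length).
print(frustum_volume([1000.0, 1000.0, 1000.0], [10.0, 10.0]))  # -> 20.0
```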

Ordering the SAX slices could be done by looking at the SliceLocation metadata field, but this field was not very reliable for finding the distance between slices, and neither was SliceThickness. Instead, we looked for the two slices that were furthest apart, drew a line between them, and projected every other slice onto this line. This way, we estimated the inter-slice distances ourselves.
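A sketch of this projection trick, assuming 3-D slice-centre positions (in mm) extracted from the DICOM metadata:

```python
import numpy as np

# Slice ordering and distances: project every slice centre onto the line
# through the two centres that are furthest apart.
def order_slices(positions):
    pos = np.asarray(positions, dtype=float)
    dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    i, j = np.unravel_index(dist.argmax(), dist.shape)   # furthest pair
    axis = (pos[j] - pos[i]) / dist[i, j]
    t = (pos - pos[i]) @ axis        # scalar position along the long axis
    return np.argsort(t), np.diff(np.sort(t))

order, dists = order_slices([(0, 0, 20), (0, 0, 0), (0, 0, 10)])  # shuffled stack
print(dists)  # inter-slice distances -> [10. 10.]
```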

Our best single model achieved a local validation score of 0.0105 using this approach. This was no longer a good leaderboard estimation, since our local validation set contained relatively few outliers compared to the public leaderboard in the first round. The model had the following architecture:

Layer Type Size Output shape
Input layer   (8, 25, 30, 64, 64)*
Convolution 128 filters of 3x3 (8, 25, 128, 64, 64)
Convolution 128 filters of 3x3 (8, 25, 128, 64, 64)
Max pooling   (8, 25, 128, 32, 32)
Convolution 128 filters of 3x3 (8, 25, 128, 32, 32)
Convolution 128 filters of 3x3 (8, 25, 128, 32, 32)
Max pooling   (8, 25, 128, 16, 16)
Convolution 256 filters of 3x3 (8, 25, 256, 16, 16)
Convolution 256 filters of 3x3 (8, 25, 256, 16, 16)
Convolution 256 filters of 3x3 (8, 25, 256, 16, 16)
Max pooling   (8, 25, 256, 8, 8)
Convolution 512 filters of 3x3 (8, 25, 512, 8, 8)
Convolution 512 filters of 3x3 (8, 25, 512, 8, 8)
Convolution 512 filters of 3x3 (8, 25, 512, 8, 8)
Max pooling   (8, 25, 512, 4, 4)
Convolution 512 filters of 3x3 (8, 25, 512, 4, 4)
Convolution 512 filters of 3x3 (8, 25, 512, 4, 4)
Convolution 512 filters of 3x3 (8, 25, 512, 4, 4)
Max pooling   (8, 25, 512, 2, 2)
Fully connected (S/D) 1024 units (8, 25, 1024)
Fully connected (S/D) 1024 units (8, 25, 1024)
Fully connected (S/D) 2 units (mu and sigma) (8, 25, 2)
Volume estimation (S/D)   (8, 2)
Gaussian CDF (S/D)   (8, 600)

* The first dimension is the batch size, i.e. the number of patients, the second dimension is the number of slices. If a patient had fewer slices, we padded the input and omitted the extra slices in the volume estimation.

Oftentimes, we did not train patient models from scratch. We found that initializing patient models with single slice models helps against overfitting, and severely reduces training time of the patient model.

The architecture we described above was one of the best for us. To diversify our models, some of the good things we tried include:

  • processing each frame separately, and taking the minimum and maximum at some point in the network to compute systole and diastole
  • sharing some of the dense layers between the systole and diastole networks as well
  • using discs to approximate the volume, instead of truncated cones
  • cyclic rolling layers
  • leaky ReLUs
  • maxout units

One downside of the patient model approach was that these models assume that the SAX slices nicely range from one end of the heart to the other. This was obviously not true for patients with very few (< 5) slices, but it was harder to detect automatically for some other outlier cases, such as the one in the figure below, where something is wrong with the images or the ROI algorithm fails.

[Figure: SAX slices sax_12, sax_15, sax_17, sax_36, sax_37 and sax_41, together with the 2Ch and 4Ch views, for an outlier patient]

Training and ensembling

Error function. At the start of the competition, we experimented with various error functions, but we found optimising CRPS directly to work best.

Training algorithm. To train the parameters of our models, we used the Adam update rule (Kingma and Ba).

Initialization. We initialised all filters and dense layers orthogonally (Saxe et al.). Biases were initialized to small positive values to have more gradients at the lower layer in the beginning of the optimization. At the Gaussian output layers, we initialized the biases for mu and sigma such that initial predictions of the untrained network would fall in a sensible range.

Regularization. Since we had a low number of patients, we needed considerable regularization to prevent our models from overfitting. Our main approach was to augment the data and to add a considerable amount of dropout.


Validation

Since the train set was already quite small, we kept the validation set small as well (83 patients). Despite this, our validation score remained pretty close to the leaderboard score. In the cases where it didn't, it helped us identify issues in our models, namely problematic cases in the test set which were not represented in our validation set. We noticed, for instance, that quite a few of our patient models had problems with patients with too few (< 5) SAX slices.

Selectively train and predict

By looking more closely at the validation scores, we observed that most of the accumulated error was obtained by wrongly predicting only a couple of such outlier cases. At some point, being able to handle only a handful of these meant the difference between a leaderboard score of 0.0148 and 0.0132!

To mitigate such issues, we set up our framework such that each individual model could choose not to train on or predict a certain patient. For instance, models working on a patient's SAX slices could choose not to predict patients with too few SAX slices, and models which use the 4Ch slice would not predict for patients who lack this slice. We extended this idea further by developing expert models, which only trained and predicted for patients with either a small or a big heart (as determined by the ROI detection step). Further down the pipeline, our ensembling scripts would then take these non-predictions into account.

Ensembling and dealing with outliers

We ended up creating about 250 models throughout the competition. However, we knew that some of these models were not very robust to certain outliers, or to patients whose ROI we could not accurately detect. We came up with two different ensembling strategies to deal with these kinds of issues.

Our first ensembling technique proceeded as follows:

  1. For each patient, we select the best way to average over the test time augmentations. Slice models often preferred a geometric averaging of distributions, whereas in general arithmetic averaging worked better for patient models.
  2. We average over the models by calculating each prediction's KL-divergence from the average distribution, and the cross-entropy of each sample of the distribution. This means that models which are further away from the average distribution get more weight (since they are more certain), and that samples of the distribution closer to the median value of 0.5 get more weight. Each model also receives a model-specific weight, which is determined by optimizing these weights over the validation set.
  3. Since not all models predict all patients, it is possible for a model in the ensemble to not predict a certain patient. In this case, a new ensemble without these models is optimized, especially for this single patient. The method to do this is described in step 2.
  4. This ensemble is then used on every patient in the test set. However, when a certain model's average prediction disagrees too much with the average prediction of all models, that model is thrown out of the ensemble, and a new ensemble is optimized for this patient, as described in step 2. As a result, about 75% of all patients received a new, 'personalized' ensemble.

Our second way of ensembling compares an ensemble that is suboptimal but robust to outliers with an ensemble that is not robust to them. This approach is especially interesting since it does not need a validation set to predict the test patients. It proceeds as follows:

  1. Again, for each patient, we select the best way to average over the test time augmentations.
  2. We combine the models by using a weighted average on the predictions, with the weights summing to one. These weights are determined by optimising them on the validation set. In case not all models provide a prediction for a certain patient, it is dropped for that patient and the weights of the other models are rescaled such that they again sum to one. This ensemble is not robust to outliers, since it contains patient models.
  3. We combine all 2Ch, 4Ch and slice models in a similar fashion. This ensemble is robust to outliers, but only contains less accurate models.
  4. We detect outliers by finding the patients where the two ensembles disagree the most. We measure disagreement using CRPS. If the CRPS exceeds a certain threshold for a patient, we assume it to be an outlier. We chose this threshold to be 0.02.
  5. We retrain the weights for the first ensemble, but omit the outliers from the validation set. We choose this ensemble to generate predictions for most of the patients, but choose the robust ensemble for the outliers.

Following this approach, we detected three outliers in the test set during phase one of the competition. Closer inspection revealed that for all of them, either our ROI detection failed, or the SAX slices were not nicely distributed across the heart. Both ways of ensembling achieved similar scores on the public leaderboard (0.0110).

Second round submissions

For the second round of the competition, we were allowed to retrain our models on the new labels (200 additional patients). We were also allowed to prepare two submissions. Of course, it was impossible to retrain all of our models during this single week, so we chose to retrain only our 44 best models, as selected by our ensembling scripts.

For our first submission, we split off a new validation set. The resulting models were combined using our first ensembling strategy.

For our second submission, we trained our models on the entire training set (i.e. there was no validation split). We ensembled them using the second ensembling method. Since we had no validation set on which to optimise the weights of the ensemble, we computed the weights by training an ensemble on the models trained with a validation split, and transferred them over.

Software and hardware

We used Lasagne, Python, Numpy and Theano to implement our solution, in combination with the cuDNN library. We also used PyCUDA for a few custom kernels. We made use of scikit-image for pre-processing and augmentation.

We trained our models on the NVIDIA GPUs that we have in the lab, which include GTX TITAN X, GTX 980, GTX 680 and Tesla K40 cards. We would like to thank Frederick Godin and Elias Vansteenkiste for lending us a few extra GPUs in the last week of the competition.


Conclusion

In this competition, we tried out different ways to preprocess data and combine information from different data sources, and we learned a lot in the process. However, we feel that there is still room for improvement. For example, we observed that most of our error still comes from a select group of patients, including the ones for which our ROI extraction fails. In hindsight, hand-labeling the training data and training a network to do the ROI extraction would have been a better approach, but we wanted to sidestep this kind of manual effort as much as possible. In the end, labeling the data would probably have been less time-intensive.

UPDATE (March 23): the code is now available on GitHub:

          Artificial Intelligence Risk: Get Ready for AI-Powered Malware      Cache   Translate Page      
IBM’s DeepLocker PoC gives the industry a look at artificial intelligence risk--and what an attack produced with the help of deep neural networks will look like. - Source:
          NVIDIA DLSS Provides ‘Huge’ Performance Boost at 4K in Dauntless; Developer Talks About 1.0 and Console Release      Cache   Translate Page      

While raytracing may have been the star of the GeForce RTX reveal and showcase, there’s another side to the ‘reinvention of graphics’ that NVIDIA has been touting and it’s the one provided by the Tensor Cores. These specialized cores allow the new Turing based GPUs to perform much faster deep neural network processing than what […]

The post NVIDIA DLSS Provides ‘Huge’ Performance Boost at 4K in Dauntless; Developer Talks About 1.0 and Console Release by Alessio Palumbo appeared first on Wccftech.

          CNN-Based Signal Detection for Banded Linear Systems. (arXiv:1809.03682v1 [cs.IT])      Cache   Translate Page      

Authors: Congmin Fan, Xiaojun Yuan, Ying-Jun Angela Zhang

Banded linear systems arise in many communication scenarios, e.g., those involving inter-carrier interference and inter-symbol interference. Motivated by recent advances in deep learning, we propose to design a high-accuracy low-complexity signal detector for banded linear systems based on convolutional neural networks (CNNs). We develop a novel CNN-based detector by utilizing the banded structure of the channel matrix. Specifically, the proposed CNN-based detector consists of three modules: the input preprocessing module, the CNN module, and the output postprocessing module. With such an architecture, the proposed CNN-based detector is adaptive to different system sizes, and can overcome the curse of dimensionality, which is a ubiquitous challenge in deep learning. Through extensive numerical experiments, we demonstrate that the proposed CNN-based detector outperforms conventional deep neural networks and existing model-based detectors in both accuracy and computational time. Moreover, we show that CNN is flexible for systems with large sizes or wide bands. We also show that the proposed CNN-based detector can be easily extended to near-banded systems such as doubly selective orthogonal frequency division multiplexing (OFDM) systems and 2-D magnetic recording (TDMR) systems, in which the channel matrices do not have a strictly banded structure.

          Analysis and Structure Optimization of Radial Halbach Permanent Magnet Couplings for Deep Sea Robots      Cache   Translate Page      
Permanent magnet couplings (PMCs) can convert the dynamic seal of transmission shaft into a static seal, which will significantly improve the transmission efficiency and reliability. Therefore, the radial Halbach PMC in this paper is suitable as the transmission mechanism of deep sea robots. A two-segment Halbach array is adopted in the radial PMC, and the segment arc coefficient can be adjustable. This paper presents the general analytical solutions of the distinctive Halbach PMCs based on scalar magnetic potential and Maxwell stress tensor. The analytical solutions of magnetic field are in good agreement with 2-D finite element analysis (FEA) results. In addition, an initial prototype of the radial Halbach PMC has been fabricated, and the analytical solutions of magnetic torque are compared with 3-D FEA and experiment results. This paper also establishes an optimization procedure for PMCs based on the combination of 3-D FEA, Back Propagation Neural Network (BPNN), and Genetic Algorithm (GA). 3-D FEA is performed to calculate the pull-out torque of the samples from Latin hypercube sampling, then BPNN is used to describe the relationship between the optimization variables and pull-out torque. Finally, GA is applied to solve the optimization problem, and the optimized scheme is proved to be more reasonable with the FEA method.
          Data-driven Discovery of Closure Models. (arXiv:1803.09318v3 [math.DS] UPDATED)      Cache   Translate Page      

Authors: Shaowu Pan, Karthik Duraisamy

Derivation of reduced order representations of dynamical systems requires the modeling of the truncated dynamics on the retained dynamics. In its most general form, this so-called closure model has to account for memory effects. In this work, we present a framework of operator inference to extract the governing dynamics of closure from data in a compact, non-Markovian form. We employ sparse polynomial regression and artificial neural networks to extract the underlying operator. For a special class of non-linear systems, observability of the closure in terms of the resolved dynamics is analyzed and theoretical results are presented on the compactness of the memory. The proposed framework is evaluated on examples consisting of linear to nonlinear systems with and without chaotic dynamics, with an emphasis on predictive performance on unseen data.

          Price Drop: Resize 2x (Photography)      Cache   Translate Page      

Resize 2x 1.3

Device: iOS Universal
Category: Photography
Price: $1.99 -> $.99, Version: 1.3 (iTunes)


This app will increase the resolution of your images twofolds directly in your device.

No pixilation, blurriness. No noise, glitches or artefacts!

This app uses a very advanced algorithm for achieving the best possible quality when resizing images. (Deep Convolutional Neural Network)

Best use for resizing anime images, drawings and photo edits!

Also this app will fix your image ratio using an image deformable sections algorithm. Just select the details and the ratio and the image will be resized without loss of information.

What's New

It solves a serious problem with the image picker, as it was not listing some media albums.

Resize 2x

          Live Coverage of Apple's iPhone XS, iPhone XR, and Apple Watch Event      Cache   Translate Page      
Apple's "Gather round" event at the Steve Jobs Theater at Apple Park begins at 10:00 a.m. Pacific Time, where it is widely expected to unveil three new iPhones (XS, XS Max, and XR) as well as new, slightly larger Apple Watch models.

Steve Jobs Theater via Apple CEO Tim Cook

We should also be hearing final details and the official release date for iOS 12, and likely macOS Mojave, watchOS 5, and tvOS 12 as well. And, of course, there may be other announcements and surprises in the cards.

Apple itself leaked a number of details this morning by prematurely updating its store site maps, and we have a summary of what's been revealed right here.

Apple is providing a live video stream on its website, via the Apple Events app on Apple TV, and on Twitter. We've shared instructions on how to watch along with a list of when the keynote starts in time zones around the world.

In addition to Apple's video stream, we will be updating this article with live blog coverage—no need to refresh—and issuing Twitter updates through our @MacRumorsLive account as the keynote unfolds.

Highlights from the event and separate news stories regarding today's announcements will go out through our @MacRumors account.

Sign up for our newsletter to keep up with Apple news and rumors.

Apple's online store is currently down in advance of the event. It should be accessible again shortly after the keynote.

Live blog in chronological order is after the jump.

8:26 am: Members of the press are checking in and milling around waiting to be admitted to the theater. As usual, Apple is providing light breakfast options.

8:31 am: Apple staff members currently blocking access to the actual theater.

9:15 am: The crowd is continuing to gather. Apple should be opening up the theater shortly.

9:22 am: Apple has opened the stairs, and members of the press are now heading down to the theater.

9:56 am: Five minutes to showtime!

10:01 am: Event is getting started with a video showing people heading down to the theater.

10:03 am: Mission: Impossible themed video showing someone rushing a last-minute item from Apple Park to the theater. Kevin Lynch beams himself directly to the theater using his watch.

10:04 am: Apple employee's badge denied access to backstage at the theater, but Kevin Lynch materializes to let her in. Tim Cook opens the briefcase and takes out his presentation clicker.

10:06 am: Cook giving an intro. We've reinvented several product categories, retail, etc. Showing Apple Piazza Liberty in Milan. Over 500 million visitors per year at Apple stores. We love that so many customers have the chance to experience our products there. We aim to put the customer at the center of everything we do.

10:07 am: We're about to ship our 2 billionth iOS device. iOS has changed the way we live, learn, work. It's changed entertainment, how we shop, how we stay in touch with each other. It's only fitting that today we're going to tell you about two of our most personal products. The ones that go with you everywhere.

10:07 am: Starting with Apple Watch.

10:09 am: This category didn't even exist a few years ago. Apple Watch is now the #1 watch, period. It's redefined what a watch can do for you. It's become indispensable for millions of people around the world. Jeff Williams on stage.

10:10 am: Apple Watch is becoming indispensable in three areas: communication, fitness, and health. Talking about heart rate monitoring..."an intelligent guardian for your health."

10:10 am: We're taking Apple Watch to the next level in all of these areas.

10:11 am: Intro video for next generation of Apple Watch.

10:12 am: Apple Watch Series 4. Everything about it has been redesigned and reengineered. It's just beautiful.

10:13 am: Stunning new display pushed right to the corners. Screens are significantly larger...over 30% larger. Minimal size increase, but it's thinner, so volume is actually less.

10:14 am: Brand-new watch face with up to 8 complications. Customize with the things you care about. Add loved ones and tap their faces to connect. Track time zones. Create the ultimate health and fitness watch. Modular face also redesigned with more detail from stock and third-party apps.

10:14 am: Breathe app is now available as a watch face. Raise your wrist, and it will guide you through a breathing session. Three breathe faces available.

10:15 am: New fire, water, and vapor faces offer dynamic visuals behind the watch hands.

10:16 am: Digital crown reengineered with haptic feedback. Speaker is 50% louder, which is great for phone calls, Walkie-Talkie, and Siri. Back is made entirely of black ceramic and sapphire crystal. Radio waves can now pass through front and back for improved cellular reception.

10:17 am: Series 4 is just as impressive on the inside. S4 package with 64-bit dual-core processor up to 2x faster.

10:18 am: Accelerometer and gyroscope have 2x the dynamic range with 8x faster sampling and up to 32 g-forces.

10:19 am: Apple Watch Series 4 can automatically detect falls. We did studies with thousands of people and captured data on real-world falls. There are repeatable motions involved in falls, trips, and slips, and the watch can detect them. It can then alert you and offer an Emergency SOS call. Will start call automatically if you're immobile for a minute after the fall.

10:21 am: Optical heart sensor has been integral since the beginning. Count calories, measure heart rate, and high heart rate notifications. Announcing three new heart features. First, low heart rate notification. Low heart rate can be a sign of something serious if not enough blood being pumped.

10:21 am: Second, Apple Watch can screen heart rhythm in the background and can notify you if it detects atrial fibrillation.

10:22 am: Third feature is with a new electrocardiogram (ECG) sensor in the back of the watch. First over-the-counter ECG product offered directly to consumers.

10:23 am: Can take an ECG anytime, anywhere. Open the app, and put your finger on the digital crown. Takes 30 seconds and gives you a heart rhythm classification...sinus rhythm, atrial fibrillation. Results all stored and can be shared as a PDF with your doctor.

10:25 am: Ivor Benjamin, president of the American Heart Association, on stage. Applauds Apple's commitment to health. People often report symptoms that are absent during a doctor's visit. On-demand ECG is game-changing, especially for atrial fibrillation, which can increase risk of stroke, heart failure, and other complications.

10:27 am: Jeff Williams back on stage. It's great to have the support of the AHA, and we've also received clearance from the FDA. This is the first of its kind. Also, the irregular heart rhythm feature has received FDA clearance. Both features will be available in the U.S. later this year.

10:27 am: It's amazing to think the watch you wear every day can now take an ECG.

10:28 am: Your personal data remains protected. You should decide who gets to see it. All data is encrypted on device and in the cloud.

10:29 am: Recapping features of the Apple Watch Series 4. You're probably wondering about battery life. Same 18-hour battery life customers have become used to. Increased outdoor workout battery life to six hours.

10:29 am: Showing another Apple Watch video. Jony Ive discussing the redesign and reengineering.

10:31 am: We've developed and refined the form, also making it thinner. New display is seamlessly integrated. The interface has been redesigned for the new display...more information with richer detail. Navigating with digital crown has been entirely reengineered with haptic feedback. New Apple-designed electrical sensor for ECG...a momentous achievement for a wearable device.

10:32 am: Accelerometer, gyroscope, and altimeter give you the ability to track more. Cellular capabilities give you more freedom. Series 4 is a device so powerful, personal, can change the way you live each day.

10:33 am: Jeff Williams back on stage. Series 4 available in silver, gold, and space gray aluminum. Stainless steel is even more beautiful...silver, space gray, and gold. All band styles fit all generations of Apple Watch.

10:34 am: Nike+ has been optimized with full-screen interface. Nike Sport Loops have reflective yarn for better visibility. New Hermes models.

10:34 am: Series 4 starts at $399, cellular at $499. Series 3 sticking around starting at $279.

10:35 am: Order starting Friday, available September 21. Series 3 at new prices available right after the show. watchOS 5 available September 17. That's Apple Watch, and now back to Tim.

10:36 am: Showing Apple Watch Series 4 commercial set to the Hokey Pokey.

10:37 am: We love what Apple Watch is doing to get the world moving. Now let's talk about iPhone.

10:38 am: Recapping iPhone X technology and capabilities. So many technologies, all powered by most advanced mobile operating system. iPhone became number one smartphone in the world. Also most loved with 98% customer satisfaction.

10:38 am: Today we're going to take iPhone X to the next level. By far the most advanced smartphone we've ever created. Showing intro video of new iPhone with gold finish.

10:39 am: This is iPhone Xs. Phil Schiller coming up to tell you about it.

10:40 am: It is made of surgical grade stainless steel. Gorgeous new gold finish. Most beautiful iPhone we've ever made. Most durable glass ever in a smartphone. Three finishes: gold, silver, space gray. IP68 protects against water up to 2 meters for up to 30 minutes.

10:41 am: Tested in many different liquids, even beer. Some of the most fun, intense testing we get to do.

10:41 am: 5.8" Super Retina OLED display. Plus-size display in a smaller design. So many customers love that. And it looks incredible.

10:43 am: 60% greater dynamic range in display. Not just one, but two sizes available. New 6.5" model. 2688x1242. Same size phone as current Plus size, but bigger display. What's bigger than plus size? iPhone Xs Max.

10:44 am: HDR displays, 120 Hz touch sensing, 3D Touch, tap to wake, True Tone, Wide color.

10:44 am: Stereo sound better than any iPhone to date. Wider field...great for movies, games, and music.

10:45 am: Face ID is a huge step forward. So much technology in that little space. Designed with multiple neural networks so it's secure and seamless. On iPhone Xs, it has faster algorithms and a faster Secure Enclave. Most secure facial authentication ever in a smartphone.

10:46 am: Powering Face ID is our A-series chip. What the team has done is truly breakthrough. A12 Bionic. Industry's first 7nm chip.

10:48 am: Packed with 6.9 billion transistors. 6-core CPU, 4-core GPU, neural engine. CPU has 2 high performance and 4 high efficiency cores. GPU is up to 50% faster. Real blow away thing is the neural engine. 8-core dedicated machine learning engine with smart compute to determine where to run a task.

10:48 am: A11 could process 600 billion operations per second. A12 can process 5 trillion.

10:49 am: Next-generation image signal processor, HEVC encoder/decoder, faster memory controller for up to 512 GB of storage.

10:49 am: A12 Bionic without question the smartest and most powerful in a smartphone. Enables so many great experiences that weren't possible before.

10:52 am: Apps launch up to 30% faster on A12 Bionic. Apps and processes that rely on machine learning...we've used it for years, but what's remarkable this year is it unlocks the power of real-time machine learning. Portrait mode, Animojis, immersive AR, and new Clips app coming this fall will all benefit.

10:53 am: Siri Shortcuts will let you get more done. Demoing "keynote day" shortcut. Launches Home app scene, orders coffee, starts Apple Music playlists, gets directions.

10:54 am: Opening up neural engine to Core ML, which gets up to 9x faster on 1/10th the energy. Frees up GPU for other features like AR, which is another area we're focusing on this year.

10:55 am: ARKit 2, new Measure app, AR Quick Look brings items into the real-world with just a tap.

10:55 am: A12 Bionic enables next-generation apps. We've got three developers to briefly show you. Todd Howard from Bethesda Game Studios on stage for a demo.

10:56 am: I wrote my first game when I was 12 on an Apple II. It's amazing how far we've come. We can now start reaching for games that are more than simple diversions. Let's take a look at new Elder Scrolls game, Blades.

10:57 am: We can pull out all of the detail you'd usually miss from the light and the dark. Lighting can bounce off of the wall. Even reflect off of your sword. We can use stereo widening on the new iPhone to hear the forest around you without headphones.

10:58 am: What used to be limited to your living room is now available on your phone. We can pull off some incredible environments that just weren't possible before.

10:59 am: It's not just immersive, it transports you. Blades coming to iOS this fall. Available for pre-order now.

11:00 am: David Lee from Nex Team and NBA hall of famer Steve Nash on stage to talk about Homecourt.

11:01 am: New tool to revolutionize basketball training. Recognizes hoop and court automatically, then tracks makes and misses, drawn as an overlay on the court. Real-time player detection. Shot science...measures shooting form and more to analyze performance.

11:02 am: Gives players immediate feedback on form, release time, and more. As players train with real-time feedback, they gain muscle memory. Great for beginners and pros. Shipping as an update this fall.

11:03 am: Atli Mar from Directive Games on stage to introduce AR-generated arcade cabinet. And with multi-player AR, everybody can experience it together. Showing Galaga AR.

11:06 am: Now back to Phil. Talking about camera. You all know it's the most popular smartphone camera, and for great reason.

11:06 am: Showing a portrait mode photo that appeared on the cover of Time magazine.

11:09 am: You are going to be blown away with iPhone Xs dual camera system. 12MP wide-angle with all-new sensor, 12MP telephoto. Improved True Tone flash. On the front, TrueDepth 7MP camera has new sensor as well. Image signal processor works with CPU to set exposure, white balance, focus, noise reduction, etc. A12 Bionic does this better than before, but also connects it to the neural engine to do better face detection, facial landmarking with instant red-eye reduction, and better segmentation for portrait mode. 1 trillion operations per photo.

11:10 am: New Smart HDR feature. Takes HDR so much further. If subject is moving, A12 shoots a 4-frame buffer and interframes to bring out details. Plus a long exposure for better shadow detail. Automatically selects best parts of each and combines them for beautiful photos.

11:14 am: Showing examples of Smart HDR photos. Now showing breakthrough bokeh capabilities. New depth slider in portrait mode to adjust depth of field after taking the picture.

11:16 am: iPhone Xs has four microphones to record stereo sound. Showing demo video of cyclists.

11:17 am: Battery life: iPhone Xs has up to 30 minutes more than iPhone X. iPhone Xs Max up to 90 minutes longer than iPhone X.

11:19 am: Dual SIM capability...keep two phone numbers, two different plans, or travel with local plan. Dual SIM Dual Standby (DSDS): both lines are active, and whichever gets the call goes active. Uses eSIM, and software helps you easily keep track of which line is which. Needs carrier support, and we're working with many to roll it out this fall. Physical SIM and eSIM worldwide on Xs and Xs Max, except in China, where it's dual physical SIMs.

11:21 am: Recapping iPhone Xs features. Lisa Jackson on stage to talk environmental friendliness.

11:23 am: Apple now runs on 100% renewable energy. That includes Apple Park with solar panels and directed biogas. But also our data centers. People said it couldn't be done, but we did it. Now we're on to our next challenge...ending mining of materials. So let's take a look at material innovations in iPhone Xs. Now using recycled tin in the logic board. This prevents mining of over 10,000 tons of tin ore per year. Reducing use of traditional plastics and transitioning to recycled and bio-based.

11:25 am: Focus on durability. Everything back to the iPhone 5s runs iOS 12, and keeping devices longer is the best thing for the planet. And when you're done, we have Apple GiveBack. Bring it in or mail it in, and we'll assess it. Either give you value if it can be reused, or recycle it.

11:26 am: Phil back on stage. So iPhone Xs, iPhone Xs Max. They are stunning, best phones we've ever made. We want to reach as many customers as we can, so that's why we're excited to show you one more iPhone. Video time!

11:27 am: So excited to introduce you to the iPhone Xr. 7000 series aerospace grade aluminum. Incredible new finishes...white, black, blue, coral, yellow.

11:29 am: Even a Product Red one. All of these have IP67 protection against dust and liquids. Display is what strikes you. LCD, but for the first time goes edge to edge. Most advanced LCD ever in a smartphone...calling it Liquid Retina. 6.1" on the diagonal, 1792x828 at 326 ppi.

11:30 am: Tap to wake, 120 Hz touch sensitivity, True Tone, wide color, no home button. Same gestures as iPhone X. No 3D Touch, but new feature called Haptic touch. Face ID.

11:32 am: Faster Face ID algorithms, faster Secure Enclave. A12 Bionic chip with real-time machine learning. 12MP single camera. Same exact wide-angle camera as in Xs and Xs Max.

11:32 am: Can do portrait mode photos, showing examples. Same bokeh as Xs and Xs Max, and depth control.

11:34 am: TrueDepth camera with portrait selfies. Battery life 90 minutes longer than iPhone 8 Plus.

11:35 am: Huge day for iPhone...three new models. Showing a product video with Jony Ive.

11:38 am: iPhone Xs is completely uncompromising. iPhone Xs Max has the largest display ever on an iPhone. Custom developed stainless steel in three finishes including new gold. Better water and dust resistance, most durable glass ever on a smartphone. Face ID reinvents secure unlock, login, and payments. A12 Bionic is the smartest and most powerful chip ever in a smartphone. More advanced dual camera system and neural engine takes us to a new era of photography. Smart HDR gives us images like never before.

11:39 am: iPhone Xr integrates the same breakthrough technologies. Entirely new range of finishes. All-screen Liquid Retina display is most advanced and color accurate display in smartphone. Machine learning can recognize subjects, depth of field is adjustable, and more.

11:40 am: iPhone Xr in 64/128/256 GB options starting at $749. Pre-order October 19, shipping October 26.

11:41 am: iPhone Xs in 64/256/512 GB, starts at $999. iPhone Xs Max in same configurations starting at $1099. Pre-order September 14, ships September 21. Second wave comes just a week later on September 28. Fastest geographic rollout we've ever had.

11:41 am: iPhone 7 and 7 Plus from $449, iPhone 8 and 8 Plus from $599.

11:42 am: iOS 12 launches Monday.

11:43 am: Tim Cook back on stage with brief HomePod update. Talking about stereo pairing and AirPlay 2. Identify songs by lyrics, multiple timers, make and receive calls, find your iPhone. Apple TV to get Dolby Atmos. Updates coming Monday.

11:44 am: macOS Mojave coming September 24.

11:44 am: Tim is now recapping today's announcements.

11:46 am: Tim thanking everyone for watching, and everyone at Apple. Event is over.

Discuss this article in our forums

          MIT taught a neural network how to show its work

MIT’s Lincoln Laboratory Intelligence and Decision Technologies Group yesterday unveiled a neural network capable of explaining its reasoning. It’s the latest attack on the black box problem, and a new tool for combating biased AI. Dubbed the Transparency by Design Network (TbD-net), MIT’s latest machine learning marvel is a neural network designed to answer complex questions about images. The network parses a query by breaking it down into subtasks that are handled by individual modules. If you asked it to determine the color of “the large square” in a picture showing several different shapes of varying size and color, for…
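
The query-decomposition idea can be illustrated with a toy sketch. Everything here is hypothetical (symbolic objects and invented module names, not MIT's actual TbD-net, which chains neural modules over learned attention masks); the point is that each module's intermediate output can be inspected, which is what makes the reasoning transparent.

```python
# Toy sketch of TbD-net-style modular reasoning (illustrative names only).

def filter_size(objects, size):
    """Attention-style module: keep only objects of the given size."""
    return [o for o in objects if o["size"] == size]

def filter_shape(objects, shape):
    """Attention-style module: keep only objects of the given shape."""
    return [o for o in objects if o["shape"] == shape]

def query_color(objects):
    """Answer module: read off the color of the single attended object."""
    assert len(objects) == 1, "question should isolate one object"
    return objects[0]["color"]

scene = [
    {"shape": "square", "size": "large", "color": "red"},
    {"shape": "square", "size": "small", "color": "blue"},
    {"shape": "circle", "size": "large", "color": "green"},
]

# "What color is the large square?" decomposed into three subtasks:
step1 = filter_size(scene, "large")    # inspectable intermediate result
step2 = filter_shape(step1, "square")  # inspectable intermediate result
print(query_color(step2))              # -> red
```

Because `step1` and `step2` are ordinary values, a human can see exactly where a chain of reasoning went wrong, much as TbD-net's attention masks can be visualized module by module.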

This story continues at The Next Web

          Samsung Opens Artificial Intelligence Research Centre In New York

The smartphone industry owes a lot to AI (artificial intelligence), especially when it comes to the new generation of revolutionary smartphones. And it seems Samsung is keen to strengthen its hold on the field of AI to keep introducing better smartphones. Samsung Electronics Co has now launched its sixth AI research centre, in New York City. On Sunday, the tech giant announced that the research centre will mainly focus on robotics.

Samsung has artificial intelligence research centres in five countries: Russia, Canada, Britain, the US (Silicon Valley), and its home country, South Korea. The new AI centre in New York will be led by Daniel D. Lee, Executive Vice President of Samsung Research. Lee, who joined the company last summer, is also a highly acclaimed AI robotics expert.

This move to open another AI research centre in New York seems to serve two purposes for Samsung. The company will now have access to a huge talent pool in New York, while being in close proximity to leading universities.

H. Sebastian Seung, a renowned neuroscientist and Chief Research Scientist of Samsung Electronics, will also be at the helm of the new research centre. In addition, Seung will help the company identify areas for further growth.

Creating Value For Customers

The new research centre is aimed at creating better products by tapping into the power of AI. Together with the other five centres, it will enable the company to offer more valuable products to its customers. In fact, earlier this year the company affirmed its plan to expand its AI research efforts by hiring nearly 1,000 AI specialists by 2020.

Samsung said that AI owes its recent achievements to technologies like neural networks, and further asserted its intention to be an integral part of this new era of AI.

Do these claims mean that Samsung is gearing up to prove its mettle in the field of artificial intelligence?

The post Samsung Opens Artificial Intelligence Research Centre In New York appeared first on iGyaan Network.

          What is the Neural Network supported by the iPhone Xs' A12 Bionic chip? What can it do for you?
          Google AI with Jeff Dean

Jeff Dean, the lead of Google AI, is on the podcast this week to talk with Melanie and Mark about AI and machine learning research, his upcoming talk at Deep Learning Indaba, and how his educational pursuit of parallel processing and computer systems set his career path toward AI. We covered topics from his team’s work with TPUs and TensorFlow, the impact computer vision and speech recognition are having on AI advancements, and how simulations are being used to help advance science in areas like quantum chemistry. We also discussed his passion for the development of AI talent in the context of Africa and the opening of Google AI Ghana. It’s a full episode where we cover a lot of ground. One piece of advice he left us with: “the way to do interesting things is to partner with people who know things you don’t.”

Listen for the end of the podcast where our colleague, Gabe Weiss, helps us answer the question of the week about how to get data from IoT core to display in real time on a web front end.

Jeff Dean

Jeff Dean joined Google in 1999 and is currently a Google Senior Fellow, leading Google AI and related research efforts. His teams are working on systems for speech recognition, computer vision, language understanding, and various other machine learning tasks. He has co-designed/implemented many generations of Google’s crawling, indexing, and query serving systems, and co-designed/implemented major pieces of Google’s initial advertising and AdSense for Content systems. He is also a co-designer and co-implementor of Google’s distributed computing infrastructure, including the MapReduce, BigTable and Spanner systems, protocol buffers, the open-source TensorFlow system for machine learning, and a variety of internal and external libraries and developer tools.

Jeff received a Ph.D. in Computer Science from the University of Washington in 1996, working with Craig Chambers on whole-program optimization techniques for object-oriented languages. He received a B.S. in computer science & economics from the University of Minnesota in 1990. He is a member of the National Academy of Engineering, and of the American Academy of Arts and Sciences, a Fellow of the Association for Computing Machinery (ACM), a Fellow of the American Association for the Advancement of Sciences (AAAS), and a winner of the ACM Prize in Computing.

Cool things of the week
  • Google Dataset Search is in beta site
  • Expanding our Public Datasets for geospatial and ML-based analytics blog
    • Zip Code Tabulation Area (ZCTA) site
  • Google AI and Kaggle Inclusive Images Challenge site
  • We are rated in the top 100 technology podcasts on iTunes site
  • What makes TPUs fine-tuned for deep learning? blog
  • Jeff Dean on Google AI profile
  • Deep Learning Indaba site
  • Google AI site
  • Google AI in Ghana blog
  • Google Brain site
  • Google Cloud site
  • DeepMind site
  • Cloud TPU site
  • Google I/O Effective ML with Cloud TPUs video
  • Liquid cooling system article
  • DAWNBench Results site
  • Waymo (Alphabet’s Autonomous Car) site
  • DeepMind AlphaGo site
  • Open AI Dota 2 blog
  • Moustapha Cisse profile
  • Sanjay Ghemawat profile
  • Neural Information Processing Systems Conference site
  • Previous Podcasts
    • GCP Podcast Episode 117: Cloud AI with Dr. Fei-Fei Li podcast
    • GCP Podcast Episode 136: Robotics, Navigation, and Reinforcement Learning with Raia Hadsell podcast
    • TWiML & AI Systems and Software for ML at Scale with Jeff Dean podcast
  • Additional Resources
    • site
    • Chris Olah blog
    • Distill Journal site
    • Google’s Machine Learning Crash Course site
    • Deep Learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville book and site
    • NAE Grand Challenges for Engineering site
    • Senior Thesis Parallel Implementations of Neural Network Training: Two Back-Propagation Approaches by Jeff Dean paper and tweet
    • Machine Learning for Systems and Systems for Machine Learning slides
Question of the week

How do I get data from IoT core to display in real time on a web front end?

  • Building IoT Applications on Google Cloud video
  • MQTT site
  • Cloud Pub/Sub site
  • Cloud Functions site
  • Cloud Firestore site
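
The pipeline behind the question of the week can be sketched end to end with the cloud services stubbed out in plain Python, so only the data flow remains. The function and field names below are illustrative, not real GCP APIs: a device publishes over MQTT to IoT Core, IoT Core republishes to Pub/Sub (base64-encoding the payload), a Cloud Function decodes the message and upserts it into Firestore, and a web front end subscribed to the Firestore document re-renders in real time.

```python
# Minimal sketch of device -> IoT Core/MQTT -> Pub/Sub -> Cloud Function ->
# Firestore -> web listener. All cloud pieces are stand-ins (plain dicts).
import base64
import json

FIRESTORE = {}  # stand-in for a Firestore collection the web app listens to

def iot_to_pubsub(device_id, reading):
    """IoT Core republishes the device's MQTT payload to Pub/Sub,
    base64-encoding the message body (as Pub/Sub push events do)."""
    payload = json.dumps({"device": device_id, **reading}).encode()
    return {"data": base64.b64encode(payload).decode()}

def cloud_function(event):
    """Pub/Sub-triggered function: decode the message and upsert the
    latest reading. A front end subscribed to this document via the
    Firestore client SDK would re-render automatically on each write."""
    msg = json.loads(base64.b64decode(event["data"]))
    FIRESTORE[msg["device"]] = {k: v for k, v in msg.items() if k != "device"}

cloud_function(iot_to_pubsub("sensor-1", {"temp_c": 21.5}))
print(FIRESTORE["sensor-1"])
```

The same shape carries over to the real services: the stubbed `iot_to_pubsub` step is configuration (an IoT Core registry bound to a Pub/Sub topic), and only `cloud_function` becomes deployed code.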
Where can you find us next?

Melanie is at Deep Learning Indaba and Mark is at Tokyo NEXT. We’ll both be at Strangeloop end of the month.

Gabe will be at Cloud Next London and the IoT World Congress.

          Artificial intelligence risk: get ready for AI-powered malware
IBM’s DeepLocker PoC gives the industry a look at artificial intelligence risk--and what an attack produced with the help of deep neural networks will look like.
"Never send a human to do a machine's job."
          Cross-Modal Scene Networks
People can recognize scenes across many different modalities beyond natural images. In this paper, we investigate how to learn cross-modal scene representations that transfer across modalities. To study this problem, we introduce a new cross-modal scene dataset. While convolutional neural networks can categorize scenes well, they also learn an intermediate representation not aligned across modalities, which is undesirable for cross-modal transfer applications. We present methods to regularize cross-modal convolutional neural networks so that they have a shared representation that is agnostic of the modality. Our experiments suggest that our scene representation can help transfer representations across modalities for retrieval. Moreover, our visualizations suggest that units emerge in the shared representation that tend to activate on consistent concepts independently of the modality.
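
The regularization idea can be sketched in miniature. The encoders below are stand-in random linear maps with made-up dimensions, not the paper's convolutional networks or its actual training objective: paired renderings of the same scene pass through modality-specific encoders into one embedding space, and an alignment penalty, added to each modality's usual classification loss, pulls paired embeddings together until the representation is agnostic of the modality.

```python
# Toy sketch of cross-modal alignment regularization (illustrative only).
import random

random.seed(0)
DIM_IN, DIM_EMB = 6, 3

def make_encoder():
    # A random linear map standing in for a CNN's final layers.
    W = [[random.gauss(0, 1) for _ in range(DIM_IN)] for _ in range(DIM_EMB)]
    return lambda x: [sum(w * xi for w, xi in zip(row, x)) for row in W]

encode_photo = make_encoder()   # modality A, e.g. natural images
encode_sketch = make_encoder()  # modality B, e.g. line drawings

def alignment_loss(z_a, z_b):
    """Mean squared distance between paired embeddings; minimizing it
    drives both encoders toward a shared, modality-agnostic space."""
    return sum((a - b) ** 2 for a, b in zip(z_a, z_b)) / len(z_a)

scene = [random.gauss(0, 1) for _ in range(DIM_IN)]  # one scene, two renderings
loss = alignment_loss(encode_photo(scene), encode_sketch(scene))
print(loss)
```

Before training, the two untrained encoders place the same scene far apart (large `loss`); the penalty is zero only when paired scenes land at the same point regardless of modality, which is the shared representation the paper visualizes.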
          KDnuggets™ News 18:n34, Sep 12: Essential Math for Data Science; 100 Days of Machine Learning Code; Drop Dropout
Also: Neural Networks and Deep Learning: A Textbook; Don't Use Dropout in Convolutional Networks; Ultimate Guide to Getting Started with TensorFlow.
          The Los Angeles Philharmonic Presents WDCH DREAMS Celebrating The LA Phil's 100th Anniversary

The Los Angeles Philharmonic has commissioned award-winning media artist Refik Anadol to create unprecedented, breathtaking, three-dimensional projections onto the steel exterior of Walt Disney Concert Hall to signal the commencement of the LA Phil's 100-year anniversary celebrations. Free and open to the public, nightly performances are scheduled to occur every half hour, with the first performance at 7:30 p.m., and the last at 11:30 p.m., September 28 to October 6.

To make Walt Disney Concert Hall "dream," Anadol utilized a creative, computerized "mind" to mimic how humans dream - by processing memories to form a new combination of images and ideas. To accomplish this, Anadol worked with the Artists and Machine Intelligence program at Google Arts and Culture to apply machine intelligence to the orchestra's digital archives - nearly 45 terabytes of data - 587,763 image files, 1,880 video files, 1,483 metadata files, and 17,773 audio files (the equivalent of 40,000 hours of audio from 16,471 performances). The files were parsed into millions of data points that were then categorized by hundreds of attributes, by deep neural networks with the capacity to both remember the totality of the LA Phil's "memories" and create new connections between them. This "data universe" is Anadol's material, and machine intelligence is his artistic collaborator. Together, they create something new in image and sound by awakening the metaphorical "consciousness" of Walt Disney Concert Hall. The result is a radical visualization of the organization's first century and an exploration of synergies between art and technology, and architecture and institutional memory.

To actualize this vision, Anadol is employing 42 large scale projectors, with 50K visual resolution, 8-channel sound, and 1.2M luminance in total. The resulting patterns, or "data sculptures" formed by the machine's interpretation of the archives will be displayed directly onto the undulating stainless-steel exterior of Walt Disney Concert Hall.

WDCH Dreams' accompanying soundtrack was created from hand-picked audio from the LA Phil's archival recordings. Sound designers Parag K. Mital, Robert Thomas, and Kerim Karaoglu augmented these selections by using machine-learning algorithms to find similar performances recorded throughout the LA Phil's history, creating a unique exploration of historic audio recordings. Viewers can access the soundtrack at the LA Phil's website (

Inside Walt Disney Concert Hall, in the Ira Gershwin Gallery, is an immersive and interactive companion installation, offering a unique, one-on-one experience for each gallery visitor. The exhibition presents the entire LA Phil digital archives in a non-linear fashion. The visitor, via a touchscreen interface, can interact with the archives in multiple ways: via a sunburst timeline; through curated moments highlighting milestones in the LA Phil's 100-year history; and by delving into the entire data universe that can be uniquely manipulated by each gallery visitor. The space will be re-imagined as a mirrored U-shaped room with two-channel projection. Visuals will be projected onto the mirrored surface giving the visitor a truly immersive, 360-degree experience.

The Ira Gershwin Gallery opens to the public on September 28 and will remain open throughout the Centennial season. Visitors can reserve times to access the gallery via The Music Center's self-guided tour schedule weekdays from 10am - 3pm. Ticket holders to any concert during the Centennial season can access the gallery 90 minutes prior to performances.

As a part of the Centennial celebrations, the LA Phil will make a selection of its archives, and online exhibitions, available on Google Arts & Culture, exploring how WDCH Dreams was made, with behind-the-scenes footage and a short film about the development of the project.

For more information about WDCH Dreams, please visit:

          Data Scientist - eBay Inc. - Austin, TX
GBM, logistic regression, clustering, neural networks, NLP. Strong analytical skills with good problem-solving ability.
From eBay Inc. - Sat, 25 Aug 2018 08:05:35 GMT - View all Austin, TX jobs
          Audio Signal Processing Engineer - GuruLink - Montréal, QC
Support Vector Machines, Hidden Markov Models, Deep Neural Network architectures; Audio Signal Processing Engineer....
From GuruLink - Tue, 04 Sep 2018 04:37:30 GMT - View all Montréal, QC jobs
          Episode 143: Ostensibly Helpful, But Actually Dangerous
This week Dave and Gunnar talk about things that are ostensibly helpful, but actually dangerous: robotic tutors, voice modulators, autocomplete, and the hellscape of Android VPN apps.

  • Creeper sauce is back!
  • Gunnar can’t wait for the delivery of his Tom Bihn Tristar
  • Human vs. robot ping pong
  • Hushme Lets You Talk On The Phone Privately While Pretending To Be Bane
  • Researchers Issue Security Warnings About Several Popular Android VPN Apps
  • The browser setting everyone should turn off now
  • Is The Future Of Television Watching on Fast-Forward?
  • Network Television Stations Speed Up TV Shows to Fit in More Ads
  • Couch to 5K, RunKeeper, and the value of chains

Cutting Room Floor

  • Recreating Asteroids with open source and a laser projector
  • We can now 3D print Slinkys
  • Robot Solves Sudoku on Paper
  • AI Move Poster Generator
  • Create Hilarious Fake Inspirational Messages With InspiroBot
  • New paint colors invented by neural network
  • Metal band names invented by neural network
  • Neural networks can name guinea pigs
  • Princeton students after a freshman vs. sophomores snowball fight, 1893
  • A Virtual Machine, in Google Sheets

We Give Thanks

  • The D&G Show Slack Clubhouse for the discussion topics!
          Episode 131: #131: Send In the Clowns
Cutting Room Floor

We Give Thanks
  • The D&G Show Slack Clubhouse for the discussion topics!

          Episode 104: #104: Tamper Evident

This week Dave and Gunnar talk about: credit card vulnerabilities, Dell vulnerabilities, and whether programmers are engineers.

Cutting Room Floor

We Give Thanks

  • The D&G Show Slack Clubhouse for the discussion topics!
          09/12 Links Pt2: Soviet Antisemitism in a British Guise; Interview with Colonel Richard Kemp; Artist turns Kassam rockets into flowers and mezuzahs
From Ian:

Soviet Antisemitism in a British Guise
Most of all, there was the Soviet practice of wheeling out “citizens of Jewish nationality” to denounce Zionism as a “racist” tool of “imperialism.” In March 1983, the Soviet news agency TASS even published a definition of Zionism drawn up by the state-run “Jewish Anti-Zionist Committee” that read as follows:

“In its essence, Zionism is a concentration of extreme nationalism, chauvinism, and racial intolerance, justification of territorial seizure and annexation, armed adventurism, a cult of political arbitrariness and impunity, demagogy and ideological sabotage, sordid maneuvers and perfidy.”

To my mind, the most obvious question here to Jeremy Corbyn, The Morning Star, and those of a similar pedigree, is this: Is there anything in this Soviet definition of Zionism that you disagree with? Make no mistake, the answer is critically important, because it is exactly this characterization of Zionism that grounded both the USSR’s domestic persecution of its Jewish community, and its international alignment with Arab regimes and terrorist groups.

If the answer is to disagree with this formulation — highly unlikely, given that Corbyn himself was present at dozens of left-wing political gatherings during the 1970s and ’80s where Soviet and Arab antisemitic literature was distributed — then it is a disingenuous one. Because when Corbyn and those in his camp speak and write about the triangle of Jews, Zionism, and Israel, these are the terms in which they think, and have always thought.

That is why Corbyn’s house journal uses terms like “embittered fifth column” to describe their leader’s Jewish opponents — also used by Valery Emelyanov, an official Soviet ideologue, in 1978 to describe the “internal danger” posed by Soviet Jews. It’s why they have no qualms about saying that Jewish leaders opposed to Corbyn have “tasted blood,” despite the associations with the antisemitic blood libel that such a metaphor unleashes; then again, Vladimir Begun, a particularly toxic Soviet antisemite, wrote with great enthusiasm of the “bloodthirstiness” that was inherent in “Zionist gangsterism.”

Given the number of occasions that Corbyn publicly defended the Soviet regime — “The Soviet Union makes far greater nursery provision than this country” (1984), “I do not believe that [the USSR] has ever intended to invade western Europe” (1990) — he was clearly well aware of Moscow’s stance on all the key international matters of the time, as well as its propaganda practices. That doesn’t make him a spy, but it does make him an ideological fellow-traveler. And as The Morning Star has demonstrated by defending Corbyn with an ugly rhetorical assault on British Jews, that Soviet-inspired journey rolls on.

Yisrael Medad: An Exercise in Deconstruction
Deconstruction is a literary term indicating "a critique of the relationship between text and meaning".

I found this poem, "Everything in Our World Did Not Seem to Fit" by Naomi Shihab Nye here. It is an example of "new Palestinian poetry". Excuse me, "Arab Palestinian poetry". Ms. Nye's family roots are in Sinjil, just down the road from Shiloh where I live.

I realized that her poem is a literal work of deconstruction - of history, of Jewish national identity, of politics and of simple rational logic.

Let's deconstruct that literary work.

Once they started invading us.
Actually, the Arabs invaded Eretz-Yisrael in 638 CE. Moreover, despite the loss of political independence, Jews continued to reside in the Land of Israel, if in small numbers depending on the conditions and circumstances of the various occupiers.

Taking our houses and trees, drawing lines, pushing us into tiny places.
Throughout the Zionist resettlement enterprise, almost all the land was purchased from its owners.

It wasn’t a bargain or deal or even a real war.
The Arab terror wars against Jews in 1920, 1921, 1929 and 1936-1939, and the 1947 war, were real, as were the fedayeen and the PLO's launching in 1964.
Interview with Colonel Richard Kemp
A United Nations report on recent violent clashes along the Israel-Gaza border hasn’t been written yet, but international terrorism expert Richard Kemp already knows it will condemn Israel for defending its borders against armed hordes.

Kemp, a retired British army officer who has watched, and fought against, terrorism around the world for 30 years, will tell an audience in Hamilton Thursday that’s the traditional response of a world community that doesn’t want to face up to terrorism.

In an interview ahead of his appearance, Kemp said the recent attacks on the Gaza border were just the latest phase of ongoing efforts by the terrorist group Hamas to smash the Jewish state.

It’s a campaign built around Hamas’ standard tactic of sprinkling its terrorists among civilians in the hope Israel’s response will result in civilian deaths that can, in turn, spark international outrage against the country.

“By creating a situation of violent disorder, breaking through the fence and attacking Israeli communities Hamas hope to provoke Israel in the hope that Israel’s reaction will result in many of (Hamas’) own people being killed,” he said.

Kemp said the tactic has worked to a degree – even fair-minded citizens around the world who understand a country’s right to defend itself are made uncomfortable by the sight of civilians being shot by Israeli soldiers.

“Even if people are against Israel, most sane people can understand that a country has to respond if it is attacked, if rockets are launched at it or attack tunnels are dug underneath it. Most people can accept that even if they don’t like it,” Kemp said.

“Even if they accept that they can’t understand how a civilized country like Israel can gun down people involved in peaceful demonstrations. We know they’re not peaceful demonstrations, but that is how it’s portrayed.”

New York Times Stumbles in a Strange Front-Page Antisemitism Story
A front-page New York Times news article appears under the headline “U.S. Revives Rutgers Bias Case In New Tack on Anti-Semitism.”

The Times article hypes what it describes as a “significant policy shift.” It claims that the federal education department and its assistant secretary, Kenneth Marcus, “put the weight of the federal government behind a definition of anti-Semitism that targets opponents of Zionism.”

It goes on to claim that the Education Department “adopted a hotly contested definition of anti-Semitism that included ‘denying the Jewish people the right to self-determination’ by, for example, ‘claiming that the existence of a State of Israel is a racist endeavor’ and ‘applying double standards by requiring of’ Israel ‘a behavior not expected or demanded of any other democratic nation.'”

It’s extremely strange that The New York Times would all of a sudden describe this particular definition of anti-Semitism as “hotly contested.” The Times itself, as recently as this month, published two articles describing the exact same definition as “internationally accepted.”

On September 4, the Times published a Reuters dispatch: “LONDON — Britain’s opposition Labour Party adopted an internationally accepted definition of anti-Semitism on Tuesday.” An Associated Press dispatch published by the Times the same day begins, “LONDON — Britain’s main opposition Labour Party on Tuesday adopted an internationally recognized definition of anti-Semitism.” On July 26, a Times-written article from London also referred to the same definition as “internationally accepted.”

Got that? When the British Labour Party adopts the definition, the Times describes it, accurately, as “internationally accepted.” Yet when the US government tries to enforce the definition on an American college campus, then all of a sudden the Times describes the definition as “hotly contested.” In fact, the definition is internationally accepted everywhere except in The New York Times newsroom, or at least in that portion of it responsible for the Rutgers article.
Sightless in Bethlehem
“Israel made me hate it” has long been a common theme among Jewish critics of Israel. In fact, New York Times columnist Thomas L. Friedman practically built his career out of it, winning a Pulitzer Prize for a book based on the claim that he was an uncritical supporter of Israel until he covered the 1982 Lebanon war, where he saw outrageous Israeli behavior that changed his mind.

That version of Friedman’s biography turned out to be a fabrication. In 1990, pro-Israel activists revealed that he actually had been a hostile critic of Israel all the way back to his days as a student at Brandeis University, in the early 1970s. He was the leader of a student group that publicly derided Israel’s Labor government for not negotiating with Yasir Arafat, and condemned the Jewish community for protesting against Arafat’s infamous speech at the United Nations.

But back in 1990, before there was an internet, it wasn’t hard for anyone to get away with misrepresenting his biography. The exposure of Friedman’s lies was buried, and he was soon promoted to the position of op-ed columnist for the Times, a perch from which he has bashed Israel on countless occasions.

Now it looks like a new generation is taking up where Friedman left off in the department of creative autobiographical writing. Case in point: Jacob Plitman, the 27 year-old editor of the far-left magazine Jewish Currents.

Recently, a major article in the New York Jewish Week featured Plitman, describing him as being at the center of an “emerging new Jewish left” that is more willing than its elders to criticize Israel. And how did Plitman get to be a critic of Israel? You guessed it—Israel made him hate it.

As a youngster, “his parents sent him to a Young Judea camp.” Plitman “soaked up its messaging on Israel and Zionism,” so much so that he decided to spend his gap year in Israel. And then came The Great Disillusionment.

“His rosy view of Zionism began to change,” the article reported, when Plitman “visited Bethlehem” (interesting choice for a Jewish tourist) “and came face to face with Palestinians for the first time. ‘The things that I saw there were more powerful than my ability to ignore them’,” Plitman told the Jewish Week. As a result, Plitman and his magazine now devote themselves to railing against “the occupation” and “the settlers.”

That’s his version. But I have my suspicions. Here’s why.

Israel among 'shameful' countries abusing human rights activists, according to U.N.
The United Nations on Wednesday listed Israel among 38 "shameful" countries, which it said had carried out reprisals or intimidation against people cooperating with it on human rights, through killings, torture and arbitrary arrests. Allegations of ill-treatment, surveillance, criminalisation, and public stigmatization campaigns targeting victims and human rights defenders were also included on the list.

Israel earned its spot on the list for its ongoing legal battle against Human Rights Watch representative Omar Shakir, whose visa wasn't extended and whose deportation was ordered last May on the grounds of supporting the BDS movement. Interior Minister Deri, who ordered the deportation, said he acted on the recommendation of the Strategic Affairs Ministry, which had gathered information showing that Shakir “is an active and consistent supporter of boycotting Israel.”

Human Rights Watch challenged the decision, accusing Israel of trying to silence criticism of its human rights record and arguing that going after Shakir was an attempt to go after HRW as a whole.

Shakir remains in the country after the Jerusalem District Court backtracked on its original decision to go through with the deportation and the case remains under review.

The UN report did not explain how it categorized the case against Shakir or why it placed Israel on the "shameful" list.
This Week in Julia Salazar She had a trust fund, her ancestors were Catholic elites, and she has a new version of her conversion story.
A state senate race that was once hailed as a test of the rising strength and power of insurgent socialists has devolved into a full-fledged New York City tabloid circus, featuring charges of lies, identity fraud, theft, and an affair with New York Mets legend Keith Hernandez.

And that was just last week.

Every time the life story of first-time state senate candidate Julia Salazar, 27, seems it can’t get any more convoluted, it does. First, questions were raised about her religious background and political affiliation, after it was revealed she grew up in a Christian family and was a registered Republican who led an anti-abortion group in college before running for office as a Jewish socialist. Then, her self-identification as an immigrant came under fire — she was born in Miami — and her own brother went to town on her claims that she is from a working-class background. Next came revelations of a complex legal dispute with Hernandez’s wife that had led to Salazar being arrested on identity-impersonation charges; she doggedly pursued a defamation countersuit that was ultimately settled in her favor. Amid that story, legal documents surfaced showing her lawyer pointing to “Ms. Salazar trust Account records showing in excess of $600,000” in assets in 2011 and therefore no incentive to steal from Hernandez’s wife.

Now the campaign has confirmed Salazar has had substantial assets held in trust for her. “Julia’s father, who played a very limited role in raising her after her parents’ divorce, was not able to work due to disability in the final years of his life, but on his death in 2009 he left a house and considerable retirement savings; those assets were put in a trust to be divided evenly between Julia and her brother,” campaign spokesman Michael Kinnucan said. “Julia does not have direct access to the trust; the trustee is a relative in Colombia.”

On September 13, Salazar will face off with incumbent Senator Martin Malavé Dilan; the primary winner is all but assured of victory in November.

Salazar’s bid, which has won endorsements from and/or appearances with Alexandria Ocasio-Cortez, Cynthia Nixon, and Nina Turner of Bernie Sanders’s Our Revolution, has even drawn attention in her father’s homeland of Colombia, where a genealogist’s findings further undermine her early campaign claims of having come from a working-class mixed Jewish-Christian family and raise questions about how aware of her own Colombian heritage she has sought to be, despite her repeated statements identifying with it. Meanwhile, interviews with the candidate and with Jewish religious leaders show her story about converting to Reform Judaism could not have happened as she has described it to multiple reporters, because the person she now says guided her conversion process was not an ordained rabbi and also was, in any case, not affiliated with Columbia University the year she initially said she converted through the Columbia/Barnard Hillel.

According to Maria Emilia Naranjo Ramos, a genealogist with the Colombian Academy of Genealogy and Historic Academy of Córdoba, the Salazars have for generations been a prosperous family in Colombia that has played a prominent role in civic and political life. Far from being the daughter of struggling immigrants of mixed Jewish-Catholic religious heritage, which early news reports described her as based on her statements and those of her campaign, Julia Salazar is the scion of longtime Latin-American Catholic elites.
Citizens Union drops endorsement of Julia Salazar, citing 'not correct' information about her academic credentials
A good government group has withdrawn its backing of Julia Salazar for state Senate, saying the candidate provided information about her academic credentials that proved to be incorrect.

Citizens Union had previously issued its “preference” — the term the group uses rather than endorsement — for Salazar, whose campaign has been plagued in recent weeks by revelations about misrepresentations of her past religious and political beliefs and her immigration status.

“Citizens Union is hereby rescinding the preference it expressed for Julia Salazar in the Democratic Primary for New York State Senate District 18,” Randy Mastro, the chair of the group, said in a statement. “Salazar recently admitted that the information she originally provided to Citizens Union about her academic credentials was not correct, so Citizens Union has decided to express no preference in this race.”

A campaign spokesman called it an “error in her endorsement application.”

"Julia regrets that an error in her endorsement application led to Citizens Union rescinding its endorsement, but remains committed to working with Citizens Union and others opposed to Albany corruption if elected to take money out of politics and clean up Albany,” he said.

Gillum Aligns With Groups That Support Boycotts of Israel
Florida gubernatorial candidate Andrew Gillum has aligned himself with several prominent anti-Semitic organizations known for promoting boycotts of Jewish goods and individuals, fueling questions about how the Democratic candidate would handle issues of import to the state's large pro-Israel community.

Gillum, who is riding a progressive wave of young Democrats highly critical of Israel, is running against Rep. Ron DeSantis (R., Fla.), a prominent Israel supporter. The Democrat has a history of working with several organizations promoting the Boycott, Divestment, and Sanctions movement, or BDS, an anti-Semitic movement that seeks to wage economic and political warfare on the Jewish state.

Gillum's open association with these organizations is raising questions in the pro-Israel community, particularly as U.S. states seek to slash ties with BDS organizations and prevent taxpayer funds from supporting these movements. While Gillum has committed to "push back against anti-Israel efforts, like BDS," he has not distanced himself from several organizations leading the charge.

DeSantis, meanwhile, has positioned himself firmly against the BDS movement and is the co-author of legislation that will protect American businesses from being pressured into backing Israel boycotts. The issue is likely to be raised with both candidates as the gubernatorial contest heats up in a state with many Jewish voters.

DeSantis said he is concerned and dismayed by Gillum's ties to radical anti-Israel groups.

"In all my years in Florida, I’ve never seen a candidate for state office who has been as anti-Israel as Andrew Gillum," DeSantis told the Washington Free Beacon in an interview. "He opposes our embassy in Jerusalem, he does not recognize Jerusalem as Israel’s eternal and indivisible capital, and he even criticizes Israel’s response against Hamas [militants] in May of 2018. His anti-Israel views are part and parcel of his overall far left wing, Democrat socialist agenda. He doesn’t share the values of the vast majority of people in Florida with his position."
Former ambassador to U.N. Prosor writes scathing piece against Corbyn
Prosor wrote that during his tenure as ambassador to the UK from 2007 to 2011, “the extreme-Left had started to dominate debate on Israel, not with rational, legitimate criticism but with irrational, racist hatred. I saw antisemitic poison, tropes of ‘Zionist control,’ being injected from the political fringes into the arteries of British public life. Demonstrations outside my embassy turned violent.”

Prosor said that Corbyn was neither the most charismatic or the smartest of the politicians who attended anti-Israel gatherings at the time, “but he was without doubt one of the most committed. As was his director of strategy, Seamus Milne.”

“Corbyn,” according to Prosor, “can’t solve Labour’s antisemitism problem, because he embodies it.” He called the Labour Party leader an “equal opportunities terrorist sympathizer,” saying that whatever the terrorist group – be it the PLO, Hezbollah, or Hamas – they could count on Corbyn’s support as long as their target was Israelis or Jews.

“Corbyn didn’t invent the crank politics of conspiracy theory and Jew-hatred. But he has taken it from the fringe meetings of the far-Left and placed it on the front-benches of the House of Commons,” Prosor wrote.

He concluded: “This is not just an issue for the Labour Party, or for a British Jewish community feeling threatened and vulnerable. Corbyn is an embarrassment for Britain. Around the world, all who, as I do, love and admire Britain are watching and hoping that the British public, famed for their decency, tolerance and sense of fair play, stand together and say, enough is enough.”
Top Corbyn aide said working in parliament despite rejected security clearance
A top aide to UK Labour leader Jeremy Corbyn was refused clearance required to work in parliament over security concerns but has worked there regularly anyway, entering with the help of other staff members, Huffington Post UK reported Wednesday.

Corbyn’s private secretary Iram Awan was hired in late 2017 but was denied clearance due to concerns by security services over her associates, the report said. There were no details on who those associates were or how they could compromise security.

Despite this, Awan has regularly been working in the Commons, the report said, with other Labour staffers routinely meeting her at the entrance and waving her in to security personnel, apparently as a supposed visitor. The report alleged that this behavior has been practiced for the past nine months.

“Visitor passes are for visitors only,” a parliament spokesperson told the news site. “They cannot be used to carry out work on the parliamentary estate. While we are unable to comment on specific cases, any alleged breach of the rules on passes will be investigated by the House authorities.”

The report said little is known of Awan, but noted that she has donated to Helping Households Under Great Stress, a group that seeks to “provide financial, emotional, and practical support and advice to Muslim households impacted by counter-terrorism, national security and extremism-related laws, policies and procedures in the UK and abroad.”
Another Top Corbyn Aide Working Without Security Clearance
Another top Corbyn adviser, Andrew Murray, has reportedly been working in the Labour Leader’s Commons office for eight months without the required security clearance. His application has been pending for over a year.

Andrew Murray was a member of the Communist Party of Great Britain until he joined Labour two years ago.

A quick glance at the parliamentary pass application form might explain why so many of Corbyn’s team have been having trouble. Question 30 asks if applicants have ever been associated with people or groups who have “intended to overthrow or undermine Parliamentary democracy by political, industrial, or violent means?” A number of Corbyn’s top team are probably going to need more than half a page to give full details on that one…
British MP of Palestinian Descent Condemns “anti-Semitic” Posters, as Labour Party Investigates Infiltration by Iran
A British Member of Parliament of Palestinian Arab descent has denounced the defacing of bus-stops in London with “Israel is a racist endeavour” posters as “blatantly anti-Semitic,” The Jewish Chronicle reported Thursday.

Liberal Democrat MP Layla Moran, who represents Oxford West and Abingdon, told the BBC: “I’m a Palestinian… The fact that this has come from a group that purportedly is speaking for Palestinians, I take great offence at myself, because I think it is blatantly antisemitic.” She clarified that “there are extremes in the Israeli government… But to say that an entire country is racist is entirely wrong.”

Moran was responding to comments made by Shadow Chancellor John McDonnell, a close ally of Labour leader Jeremy Corbyn, who stated that “It is not at all anti-Semitic to describe a state as racist.”

The posters were meant to mock the International Holocaust Remembrance Alliance (IHRA) definition of anti-Semitism, which Labour reluctantly adopted in full last week with a caveat vowing “free speech” on Israel. The IHRA definition gives calling “a state of Israel… a racist endeavour” as an example of anti-Semitism.
Why Rachel Shabi's 'alliance of colour' will go nowhere
Our token Mizrahi Corbynista has been gracing the columns of the Guardian as a pundit commentating on local politics. But on the question of the antisemitism rampant in the far-left of the UK Labour party, a note of anguish has been creeping into our Shabi's writings.

In response to a Labour centrist MP's charge that Labour is 'institutionally racist', her latest piece acknowledges that antisemitism is a real problem in Jeremy Corbyn's faction. What about anti-Zionism? Is there such a thing as antisemitic anti-Zionism? 'Zionism is both racist and anti-racist', she fence-sits unhelpfully, despite having written a book portraying Mizrahim as victims of Israel's Ashkenazi establishment.

She finds the view common among leftwing ideologues that Jews are 'white' allies of the Christian West to be wrong. "We are a racialised minority' she protests. She herself is a Mizrahi Jew of Iraqi origin.

Her answer is to start a Jewish-black-Asian-Muslim alliance that would relaunch Jews as 'people of colour'. This alliance is not based on shared Judeo-Christian values. 'If there is a historic sharing of values it is a Jewish-Muslim one,' she writes.
Corbyn in 2015 – let British jihadis travel to Syria
Jeremy Corbyn’s antisemitic record has been under scrutiny for several weeks now, and rightly so. But let’s not forget other issues which are important, starting with our own national security.

He should never be entrusted with the security of the United Kingdom. It would be seriously endangered.

Look back to early 2015. Islamic State was on the march. No wonder – its strength and confidence had been boosted by a big influx of foreign fighters from Europe. They included Britain’s very own “Jihadi John”, who began his beheading spree in 2014.

Those fighters were a top priority for Western intelligence services, in large part because there were fears they could return to the West deeply indoctrinated, highly trained, and ready to carry out devastating terrorist attacks.

Jeremy Corbyn was asked about this issue at the time. He didn’t sound very convinced.
UK's Corbyn stuns by blasting Hungarian PM for anti-Semitism
British Labour Party leader Jeremy Corbyn accused Hungarian Prime Minister Viktor Orban of "pandering to anti-Semitism" ahead of a vote in the European Parliament over whether to censure Hungary for breaching core EU values.

Corbyn, himself under fire for rampant anti-Semitism in his party and for his own anti-Israel and anti-Semitic comments, stunned with a tweet on Tuesday that said, "Labour MEPs will vote to hold Viktor Orban's government in Hungary to account. The Conservatives must do the same, and [British Prime Minister] Theresa May should condemn his attacks on judicial and media independence, denial of refugee rights, and pandering to anti-Semitism and Islamophobia."

Labour has been battling accusations of anti-Semitism for months, and Corbyn has previously apologized for what he has described as "pockets" of anti-Semitism in his party.

Former U.K. Chief Rabbi Jonathan Sacks has called Corbyn an anti-Semite and said comments revealed last week that Corbyn made about British Zionists five years ago were the most offensive by a senior U.K. politician in half a century.
Anti-Israel Satmar group forges UK rabbis’ pro-Corbyn letter
A hassidic group disseminated a letter in support of Labour leader Jeremy Corbyn, who has come under fire for supporting antisemites, in which the signatures of haredi leaders in the UK were forged.

The Jewish Community Council of North London tweeted overnight Tuesday to “confirm and clarify this letter is fake and bears no authority from any of the assigned names.”

The letter, purportedly from the leadership of London’s haredi umbrella organization, the Union of Orthodox Hebrew Congregations, claims to have the support of the UOHC’s Principal Rabbinical Authority Ephraim Padwa, Senior Dayan (religious court judge) S. Friedman, and 27 others.
BREAKING: Leading UK Rabbis have released a letter to repudiate the false notion that British Jews are against @jeremycorbyn and/or the @UKLabour .
"We feel it's necessary to clarify that Jews have no connection with these irresponsible remarks!"
— True Torah Jews (@TorahJews) September 9, 2018
US takes on anti-Israel BDS activities on university campuses
The U.S. Education Department's Office of Civil Rights has decided to adopt the international definition of anti-Semitism, which defines Judaism not only as a religion but also an ethnicity and includes holding Jews responsible for Israel's actions as a form of anti-Semitism, Israel Hayom learned Wednesday.

According to a letter written by Assistant Education Secretary for Civil Rights Kenneth L. Marcus to the Zionist Organization of America, anyone who acts "to deny the right of the Jewish people to self-determination, on the grounds that the State of Israel's existence is a racist endeavor" or applies double standards to Israel that it does not apply to any other democratic country will be deemed an anti-Semite.

The ZOA lauded what it called the "groundbreaking decision," saying, "This definition accurately addresses how anti-Semitism is expressed today; it recognizes that Jew-hatred can be camouflaged as anti-Israelism or anti-Zionism. The OCR is not only reassessing the evidence already in the record; the agency is also going to determine whether a hostile environment for Jewish students currently exists at Rutgers [University]."

Pro-Palestinian activists in the United States have warned the move will hinder pro-Palestinian efforts as any such activity will be deemed anti-Semitic.
Daphne Anson: Rogues with a Brogue
In Belfast on Tuesday, heavily outnumbering pro-Israel counter-protesters, dozens of raucous members of an outfit calling itself BDS Ireland (part of the Boycott, Divestment and Sanctions movement, along with Sinn Fein) yelled their opposition to Northern Ireland's first friendly football match against Israel at Windsor Park. Over 5,000 Israel-haters signed a petition demanding the match's cancellation, but the match went ahead, with the home team winning by 3 goals to nil.

Footballer James Rodríguez Caves In To Israel Haters After Being Subjected to Online Abuse
Colombian footballer James Rodríguez, considered one of the best players of his generation, was recently in Israel. And he wanted his almost 100 million (!) followers on social media to know just how amazing it is here. So he posted the following on Twitter, Instagram and Facebook.

The Israeli flag triggered the haters, resulting in the young footballer being subjected to a barrage of hate – which he clearly was not prepared for. Unfortunately, he caved in, and replaced the tweet and postings with the identical picture and wording, sans Israeli flag.

Interestingly enough, his next posting is still up, despite clearly identifying Israel and not “Palestine.”
Honest Reporting: Surf’s Up in Gaza: Riding the Anti-Israel Wave
In a long and somewhat labyrinthine article in The Independent, author and academic Andy Martin focuses on a documentary film about a Palestinian surfing club in Gaza.

Feeding the prejudices of The Independent’s readership, every activity in Gaza, even leisure, is turned into an opportunity to attack Israel and spread inaccuracies and falsehoods.

Martin makes it crystal-clear what he thinks about Israel:
Set aside the whole dubious history of Israel, the events of 1948 that the Palestinians refer to as the Nakba (or “Catastrophe”), the Six Day War, the occupation of the West Bank and East Jerusalem, the continued imperial expansion and slow-motion ethnic cleansing known euphemistically as “settlement”. Even set aside, most recently, the “Nation State” law that spells out, reiterates and reinforces a condition of apartheid. Do you want to know the final straw? They won’t allow surfboards into Gaza. They are obviously part of some sinister Hamas-inspired conspiracy. They may exhibit coded subversive messages on stickers applied to their decks, such as “Life’s a beach”, or “Surf’s Up, Dude”.

Are surfboards “illegal” in Gaza?

So let’s set aside the whole dubious references to “ethnic cleansing” and “apartheid” that Martin casually tosses into the water and deal with his allegations regarding surfboards. Are they really “illegal” in Gaza as the article and the headline suggest?
Merkel: ‘No excuse’ for far-right violence, attack on kosher restaurant
Chancellor Angela Merkel assured parliament Wednesday she takes seriously Germans’ concerns about crimes committed by migrants and pledged a strong response, but condemned recent demonstrations as “hateful,” saying there is “no excuse” for expressions of hate, Nazi sympathies or violence in response.

The comments come after the killing of a German man for which an Iraqi and a Syrian have been arrested prompted days of anti-migrant protests in the eastern German city of Chemnitz that at times turned violent.

Neo-Nazis were seen giving the stiff-armed Hitler salute at the largest demonstration, held the day after the killing, which attracted some 6,000 people. On the sidelines of the protest, masked men threw stones and bottles at a kosher restaurant while yelling "Jewish pig, get out of Germany."

The day before, in spontaneous protests by hundreds immediately after the killing, several foreigners were attacked and injured in the streets.

Merkel assured lawmakers that her government was equally aware of its responsibility to take the wider concerns of the public seriously, and that it was working with “all resolution” on the issue.

“We are especially troubled by the severe crimes in which the alleged perpetrators were asylum-seekers,” she said. “This shocks us… (and) such crimes must be investigated, the perpetrators have to be taken to court and punished with the severity of the law.”
Pig guts thrown at office of Australian lawmaker whose wife is Jewish
The district office of an Australian member of Parliament whose wife is Jewish was targeted by racists who threw pig’s entrails at the front door.

The early Wednesday morning attack follows an earlier attack by the neo-Nazi group Antipodean Resistance on September 1 on another office belonging to the same MP, Labor lawmaker Mike Kelly, in which the group plastered swastika stickers on the door.

Mike Kelly’s wife is Jewish.

The attack involving the pig’s entrails took place in the New South Wales city of Queanbeyan, located just 10 miles from Australia’s capital Canberra. The swastika attack took place in the coastal town of Bega, 135 miles from Queanbeyan.

A former military attorney, Col. Mike Kelly joined the Australian military serving in Somalia, East Timor and Bosnia, and was among senior Australian military personnel who served in the Iraq War. In 1993, he was awarded the Chief of the General Staff Commendation.

“The series of attacks directed at my electorate offices are evidence of the need for constant vigilance and the confrontation of extremist groups in our country,” Kelly told JTA.

“If the perpetrators think that they will intimidate me into refraining from defending Israel or supporting our Jewish community they are deluded,” Kelly said. “Actions like this only spur me to greater efforts and commitment. I have faced much worse threats in my Army career and I will continue to fight racism and ignorance wherever I find it.”
Database helps Jewish families obtain properties' restitution in Poland
In the small park behind the only synagogue in this city to have survived World War II, Yoram Sztykgold looks around with a perplexed expression.

An 82-year-old retired architect, Sztykgold immigrated to Israel after surviving the Holocaust in Poland. He tries in vain to recognize something from what used to be his childhood home.

“It’s no use,” he says after a while. “To me this could be anywhere.”

Sztykgold’s unfamiliarity with the part of Grzybowska Street where he spent his earliest years is not due to any memory loss. Like most of Warsaw, his parents’ apartment building was completely bombed out during the war and leveled, along with the rest of the street. His former home is now a placid park that is a favorite hangout for mothers pushing baby carriages and pensioners his age.

The dramatic changes in Warsaw’s landscape have bedeviled efforts for decades to obtain restitution for privately owned properties like Sztykgold’s childhood home, making it difficult for survivors like him to identify assets that may have belonged to their families.

But for many restitution claimants in the capital, identifying assets will become easier thanks to a recent breakthrough with an unlikely source: the establishment of a first-of-its-kind searchable database. Users need only type in the name of their family to obtain a complete overview of all the assets they may claim under a new restitution drive in Warsaw.
Israel’s Ride Vision plans to make motorcycling safer
As the race heats up toward the launch of autonomous vehicles, state-of-the-art technologies like ADAS (Advanced Driver Assistance Systems) are being designed to prevent collisions. But motorcycles have been largely overlooked in the process.

According to a 2018 US National Highway Traffic Safety Administration report, fatalities in traffic crashes occur nearly 28 times more frequently for motorcycles than for passenger car occupants, and motorcycle drivers comprise 17 percent of all driver- and passenger-related fatalities. There were 5,286 fatal motorcycle crashes in 2016 in the US, a 5.1 percent increase from 2015, according to NHTSA.

Uri Lavi and Lior Cohen are avid motorcycle riders who want to bring computer safety smarts to two-wheelers. Their company Ride Vision just raised a $2.5 million seed round from YL Ventures for its patented CAT (Collision Aversion Technology) for motorcycles.

Lavi and Cohen previously worked together in Israel’s homeland security industry. Lavi went on to become CEO of PicScout and brought in Cohen to serve as VP of R&D.

PicScout developed a technology for identifying images on the web that may have been used or modified without permission and then notifying the copyright owners. The company was acquired in 2011 by Getty Images for $20 million.

Lavi and Cohen gained expertise in technologies such as artificial intelligence, neural networks, computer vision and threat detection that is relevant in their newest venture, though it is quite different.
’70s rock band America heading to Israel to ‘give everyone a night off’
When Dewey Bunnell and Gerry Beckley of the soft rock band America finally arrive in Israel for their long-planned October 9 and 10 performances in Caesarea, they intend to not only entertain the crowds, but also to learn a little about the country.

“We’re all geared up to see stuff,” said Bunnell, who splits his time between homes in Wisconsin and Los Angeles, and has never been to Israel.

Bunnell, 66, and Beckley, 65, are two of the original members of the 1970s-era band, which has been performing continuously since it began as a high school cover band.

They had planned to perform in Israel in the summer of 2014, but canceled due to the conflict in Gaza.

Even now, said Bunnell, speaking from his lakeside home in Wisconsin, he doesn’t know that much about Israel and its political situation.

“Every place has their issues, and I have to profess, I’m not versed in Israel,” he said. “I obviously follow the news, but the region is new to me. It’s just a matter of us going in with our eyes wide open and enjoying it. Our job is to entertain people; we’ve always made that point.”
Kim Kardashian signs deal with Israeli eyewear company
World-famous supermodel and reality TV star Kim Kardashian West will be heading back to Israel next year.

Israeli sunglasses company Carolina Lemke Berlin announced Wednesday that Kardashian West has signed a deal to join Bar Refaeli - the face of the brand - in an upcoming collaboration.

The supermodel will fly to Israel in March “for a visit as part of our cooperation,” the company said. In addition to appearing in advertisements for the brand, Kardashian West will also design her own line of glasses for a limited-edition series sold by Carolina Lemke.

In a press release on Wednesday, the company said it selected Kardashian West, “the most famous woman in the United States, to be the face of the brand for at least two years, beginning in summer 2019.” The company said it will also be launching a website aimed at selling its products in the United States. Until now, the brand has been focused on Israel and Europe.
Afghan man sends gravely ill kids to heart center in Israel
Noorina is five years old and lives in Afghanistan. In July, her father brought her to Israel for lifesaving heart surgery arranged by Save a Child’s Heart (SACH), an Israeli medical charity based at Wolfson Medical Center in Holon.

When she is older, Noorina may be surprised to learn that an Afghan stranger willingly put himself and his family at risk to give her the gift of health.

Noorina was the fifth child from Afghanistan sent to SACH through the efforts of that same young Muslim father, who asked ISRAEL21c to call him Jangzapali, a pseudonym to hide his true identity.

“Jangzapali,” he explains, “means ‘victim of war.’”

Jangzapali is involved in all types of charity work and has built up an international social-media network over the past few years. Children needing urgent medical care are his top priority.

“Almost 10,000 [medical need] cases are registered with the Afghan Red Crescent. They are unable to do all cases, so through our broad network on social media, we arrange surgery for poor children in Afghanistan or India. For complicated cases they cannot handle, we work with Save a Child’s Heart,” he says.
Artist turns Kassam rockets into flowers and mezuzahs
Holding aloft a missile fragment, Gaza-area artist Yaron Bob explains to a Kann interviewer that not long ago the projectile had penetrated an Israeli car.

"The incredible Iron Dome is Israel's missile defense system. When rockets are fired into Israel, the Iron Dome shoots a missile to intercept the rocket, preventing destruction and the murder of innocent lives," says the description of one item on Bob's website.

"Bring Israel's heavenly protection into your home with this limited-edition Mezuzah made from an actual Iron Dome missile!"

"I cut it into rectangles, polish it, and craft the mezuzot. My mezuzot get grabbed up all over the world. They cost between $150-200."

Asked how much profit he can extract from a single Kassam missile, Bob makes a quick calculation and answers "About $2,000 per Kassam."

A tour of his storage area reveals a representative variety of the weapons constantly aimed at Israel's children: "Here are rockets that fell a month ago; this one's a 120mm, this one's a Grad. Wanna see a whole Grad?" he asks deferentially, and lifts one from the pile. "Made in China," he says. "This one here fell in Dimona."

The interviewer asks Bob, "How do you have all these types?" and Bob retorts, "How do they?"

We have lots of ideas, but we need more resources to be even more effective. Please donate today to help get the message out and to help defend Israel.
Apple Special Event 2018: The iPhone Keynote, Recapped in One Place



The annual Apple fall keynote (commonly known as the iPhone event) has just wrapped up. This year's program focused entirely on the Apple Watch and the iPhone, with no mention of any other product lines (iPad, Mac, and so on). Even so, it was still a very exciting keynote that introduced plenty of impressive new technology and features, so let's run through the highlights worth catching.


Apple Watch 4 (jump to this section)

iPhone XS, iPhone XS Max, iPhone XR (jump to this section)


This article is sponsored by moshi, a premium brand of iPhone accessories. They released a full range of protective cases for the new iPhones right away, and photos of the actual products appear at the end of this article.



▼ Every keynote now opens with a video and a short skit; it has become a fixed formula.

▼ This time the skit featured an employee sprinting through headquarters to hand a top-secret-looking box to Tim Cook before he took the stage. Inside the box... just a presenter's clicker. XD

Apple Watch 4

▼ Without wasting any words, the opening topic went straight to the point: the Apple Watch.

▼ The sensor on the back is a big deal this time; read on.


▼ The fourth-generation Apple Watch's display grows by more than 30%; the new case sizes are 40mm and 44mm.

▼ Paired with the new watchOS, more buttons and watch faces can be customized. The microphone and speaker have also been repositioned for louder volume and clearer pickup.

▼ The side crown now gives haptic feedback, so rotating it feels more tactile.

▼ The chip is upgraded to the S4, with a major performance boost.

▼ It finally moves to 64-bit, doubling the previous speed. The real story, though, is the accelerometer and gyroscope, which detect motion faster and more precisely. The watch also adopts a new display technology called LTPO to improve energy efficiency.

▼ It can now tell what kind of fall has occurred, and in dangerous situations, such as the wearer staying down, it can automatically call emergency services.

▼ Is that all for health features? Far from it; the new sensor is the real headliner.

▼ New heart-monitoring features: low heart rate and atrial fibrillation (arrhythmia) detection.

▼ A world first: a built-in electrocardiogram (ECG) function, already cleared by the FDA (the US Food and Drug Administration)!

One of our MacUknow editors happens to work in the medical field and wrote the following commentary (original link)

What's so impressive about the new Apple Watch?

An electrocardiogram (ECG) normally requires 12 leads (in practice, 10 electrode patches on your body) to record 12 channels of the heart's voltage changes simultaneously. For the detailed mechanism, see

But the new fourth-generation Apple Watch, using the optical heart sensor on the back of the case plus the electrode in the crown, can take a real-time ECG with nothing but a watch. (How its accuracy compares with a true clinical ECG is not yet known, but we can assume a medical 12-lead ECG still provides more precise, more clinically meaningful measurements.)


Myocardial infarction (MI) and atrial fibrillation (AF, a form of arrhythmia) do not always announce themselves, and do not necessarily make you feel unwell. When you go in for a checkup, your ECG may look normal; even if you only visit the hospital after feeling chest tightness or chest pain, you may still get a normal ECG.

See the gap? If we could capture an ECG at the very moment you feel chest tightness, chest pain, or dizziness, cardiologists could assess your heart-disease risk far more carefully. Previously you could wear a 24-hour ECG monitor, but as you can imagine, that is anything but comfortable.

With the Apple Watch 4, your heart is effectively monitored around the clock. It warns you when it detects an abnormally high or low heart rate, and by that point you already have a complete heart-rhythm record you can take straight to your doctor.

For young, healthy people with no medical history this matters less, but heart patients can treat the Apple Watch as a guardian that keeps constant watch over their condition.

In short, the Apple Watch 4's heart-health features achieve "around-the-clock monitoring," "complete records," and "real-time alerts," something earlier medical devices could rarely manage.

And the Apple Watch 4 adds fall detection: if a patient loses consciousness from a heart attack and shows no movement for a minute after falling, the watch automatically places an emergency call. The entire chain of survival is now linked together.

Most impressively, the Apple Watch 4 has already been cleared by the FDA for sale in the United States, which shows Apple is serious about treating the watch as a genuine life-saving product. Only Apple, of all companies on earth, could pour this kind of money into such R&D. Hopefully Taiwan's regulators will approve it quickly as well.


Disclaimer: the above is commentary on the materials Apple has published. This site cannot take responsibility for the Apple Watch's medical functions; for real medical questions, please consult your family physician.

▼ Although the new sizes are 40mm and 44mm, Apple says they remain compatible with existing 38mm and 42mm bands, which is a generous touch.

▼ Pricing starts at US$399, or US$499 with LTE, while the Series 3 drops to US$279.

▼ Since medical features are involved, Taiwan will inevitably miss the first launch wave. Taiwan's medical regulations are actually quite strict, so personally I'm not optimistic about the release date.

▼ But if you have friends who can buy one for you in Hong Kong, Japan, or another launch country, preorders start 9/14 and sales begin 9/21.

▼ The new watchOS 5 update will also arrive on 9/17, so existing Apple Watch owners can get an early taste.


iPhone XS, iPhone XS Max, iPhone XR

▼ First up are the successors to the iPhone X: the iPhone XS and XS Max (so this year's flagship comes in two sizes).

▼ First, the water-resistance rating finally improves to IP68, good for 30 minutes at a depth of two meters.

▼ Whether it drops into a swimming pool or gets beer spilled on it, you can now worry a lot less.

▼ Beyond better HDR, the Super Retina display reaches 458 ppi; combined with OLED's extreme contrast, you can imagine how good the viewing experience will be.

▼ That said, most of the specs had leaked beforehand, so by this point the keynote felt a bit flat and I started to get sleepy.

▼ Still worth noting: the stereo sound is much improved, and the difference should be obvious whether you are gaming or watching movies.

▼ On the front, the Face ID hardware is unchanged, keeping the same specifications as before.

▼ However, the neural network hardware gets a major upgrade, which in turn speeds up Face ID recognition.

▼ That is thanks to this year's new A12 Bionic, the first chip built on a 7nm process.

▼ Alongside a 4-core GPU and a 6-core CPU there is an 8-core Neural Engine, a huge step up for machine learning on this year's iPhones.

▼ For a staggering comparison: last year's A11 managed 600 billion operations per second.

▼ This year's A12 reaches 5 trillion operations per second, an almost tenfold jump.

▼ So on the A12, searching and classifying photos by keyword is faster and more accurate.

▼ Likewise, every machine-learning workload gets a substantial boost.

▼ Siri Shortcuts, for example.

▼ Core ML runs up to 9 times faster while drawing less power.

▼ AR computation benefits greatly as well. Several apps were demoed on stage, such as a basketball training app that tracks shooting accuracy in real time while also capturing the player's movements, the arc of the ball, and more.

▼ It then produces live statistics to help players improve.

▼ It is also impressively good at reasoning about transformations in 3D space.

▼ Beyond utility apps, games naturally get a great AR experience too (though to bystanders it probably still looks pretty strange XD).

▼ The camera hardware itself is not dramatically upgraded, but riding on that monster chip it still delivers some astonishing features.

▼ Looking purely at megapixels, there is essentially no improvement over the iPhone X.

▼ But the image signal processor (ISP) takes another step forward. The instant you press the shutter, the phone has already run countless computations to adjust lighting, white balance, focus, and more.

▼ With the A12, the ISP can process even more signals, including face detection, scene modeling, and so on.

▼ Every photo you take is the product of a trillion operations, which is remarkable.

▼ As a result, HDR photos are even more impressive this time around.

▼ Traditional HDR handles extreme lighting, such as backlighting or harsh light, by shooting at several exposures and merging them into one well-lit photo. The new HDR instead samples even more frames under different lighting conditions.

▼ It then picks out the better frames and merges them into one. With such a powerful chip, even fast-moving subjects come out cleanly.
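Apple's pipeline is proprietary, but the general multi-exposure idea described above, weighting each frame's pixels by how well exposed they are and blending them, can be sketched in a toy grayscale form (the function name and weighting scheme are our own illustration, not Apple's algorithm):

```python
def fuse_exposures(frames, target=0.5):
    """Blend several exposures of the same scene pixel by pixel,
    weighting each frame by how close its pixel is to a mid-tone value.
    frames: list of equal-length lists of pixel values in [0, 1]."""
    n_px = len(frames[0])
    fused = []
    for i in range(n_px):
        # Well-exposed pixels (near the mid-tone) get the largest weights.
        weights = [max(1e-6, 1.0 - abs(f[i] - target) / target) for f in frames]
        total = sum(weights)
        fused.append(sum(w * f[i] for w, f in zip(weights, frames)) / total)
    return fused
```

Fusing an underexposed 0.1, a mid-tone 0.5, and an overexposed 0.9 pixel, for instance, yields a result dominated by the well-exposed frame.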

▼ Several sample photos followed: even shot against the light, the detail on the shaded side of the face remains intact.

▼ More extreme still, under heavy backlight, the water droplets lose no detail, showing off the wider dynamic range.

▼ As for the new Bokeh feature, there is no official Chinese name yet; Apple only describes it as "advanced bokeh and depth control."

▼ Simply put, heavy computation simulates the depth of field you would get by adjusting the aperture, letting you keep the background sharp or melt it into bokeh.

▼ With stills improving this much, video naturally improves too: faster sensor readout and stereo audio recording.

▼ The demo video shown on stage was genuinely impressive on the audio side, with the sound of a bicycle crossing the frame panning from left to right with strong presence, and the bright and dark areas of the image were handled well too.

▼ On battery life: the A12 is extremely powerful yet also very efficient, so the iPhone XS and iPhone XS Max last 0.5 and 1.5 hours longer, respectively, than the iPhone X.

▼ As rumored, there is dual-SIM dual-standby, though not with two physical SIM slots: you get one physical SIM plus an eSIM. Only the China models take two physical SIM cards.

▼ Apple also made sure to mention that these iPhones are manufactured with 100% renewable energy. Very green; round of applause.

▼ Then came the budget model, the iPhone XR: a 6.1-inch display, a single camera, and many colors, blue, white, black, yellow, coral, and red.

▼ Budget model or not, the specs are no slouch: it uses the same A12 chip, though the screen is a slightly lower-resolution Liquid Retina HD display (326 ppi versus the iPhone XS's 458 ppi).

▼ Besides the lower resolution, the screen is LCD rather than OLED.

▼ There is also no 3D Touch; instead it uses Haptic Touch, which essentially replaces a firm press with a long press.

▼ The front still gets Face ID.

▼ It uses the same-tier A12 chip, unlike the budget iPhone 5C of years past, which made do with the previous generation's chip.

▼ Though it has only a single camera, it still offers Portrait mode along with the bokeh and depth-control features.

▼ On paper, despite being the budget model, its hardware is nearly identical to the iPhone XS, and the screen is even slightly larger.

▼ The price everyone cares about: starting at US$749 for 64GB, or NT$26,900 in Taiwan. Preorders open 10/19, with sales starting 10/26.

▼ As for the iPhone XS and XS Max, prices start at NT$35,900 and NT$39,900. Preorders open on the afternoon of 9/14, with sales starting 9/21.

▼ Thank goodness for the official Apple Stores; Taiwan stays in the first launch wave.

▼ The existing iPhone 7 and 8 also get matching price cuts.

The iPhone XS (NT$35,900) and the budget iPhone XR (NT$26,900) are NT$9,000 apart. Since the specs are mostly similar, what exactly does that NT$9,000 buy?


  • XS is IP68 water-resistant; XR is IP67.
  • XS has dual cameras; XR has a single camera.
  • XS has an OLED display (genuinely a high-cost component); XR has an LCD.
  • XS has 3D Touch; XR does not.
  • XS arrives a month earlier, with extra prestige.

Whether those points are worth NT$9,000 is up to you.

Another advantage of choosing the XS: if you currently own an iPhone X, your cases and other accessories carry over, since both are 5.8 inches.


▼ Accessory maker Moshi has already released XR and XS Max accessories and lent me some to photograph, so here is a size comparison. From left to right: an iPhone X (the same size as the XS), then an XR in a case, and on the far right an XS Max in a case.

▼ Clear cases, impact-resistant cases, or cases and sleeves for every purpose, Moshi has them all ready, so all we need to do is preorder with peace of mind.

▼ Accessories for both the 6.1-inch XR and the 6.5-inch XS Max are fully covered.








Comment on First Wave of Spiking Neural Network Hardware Hits by Nicole Hemsoth
I did (as said in the article) think using CIFAR-10 was a little out there. There is no way to do a real apples-to-apples comparison with this device against anything else out there so far, really.
First Wave of Spiking Neural Network Hardware Hits
Data Scientist - eBay Inc. - Austin, TX
GBM, logistic regression, clustering, neural networks, NLP. Strong analytical skills with good problem-solving ability. eBay is a Subsidiary of eBay....
From eBay Inc. - Sat, 25 Aug 2018 08:05:35 GMT - View all Austin, TX jobs
Learning Robust Features and Latent Representations for Single View 3D Pose Estimation of Humans and Objects
Estimating the 3D poses of rigid and articulated bodies is one of the fundamental problems of Computer Vision. It has a broad range of applications including augmented reality, surveillance, animation and human-computer interaction. Despite the ever-growing demand driven by the applications, predicting 3D pose from a 2D image is a challenging and ill-posed problem due to the loss of depth information during projection from 3D to 2D. Although there have been years of research on 3D pose estimation problem, it still remains unsolved. In this thesis, we propose a variety of ways to tackle the 3D pose estimation problem both for articulated human bodies and rigid object bodies by learning robust features and latent representations. First, we present a novel video-based approach that exploits spatiotemporal features for 3D human pose estimation in a discriminative regression scheme. While early approaches typically account for motion information by temporally regularizing noisy pose estimates in individual frames, we demonstrate that taking into account motion information very early in the modeling process with spatiotemporal features yields significant performance improvements. We further propose a CNN-based motion compensation approach that stabilizes and centralizes the human body in the bounding boxes of consecutive frames to increase the reliability of spatiotemporal features. This then allows us to effectively overcome ambiguities and improve pose estimation accuracy. Second, we develop a novel Deep Learning framework for structured prediction of 3D human pose. Our approach relies on an auto-encoder to learn a high-dimensional latent pose representation that accounts for joint dependencies. We combine traditional CNNs for supervised learning with auto-encoders for structured learning and demonstrate that our approach outperforms the existing ones both in terms of structure preservation and prediction accuracy. 
Third, we propose a 3D human pose estimation approach that relies on a two-stream neural network architecture to simultaneously exploit 2D joint location heatmaps and image features. We show that 2D pose of a person, predicted in terms of heatmaps by a fully convolutional network, provides valuable cues to disambiguate challenging poses and results in increased pose estimation accuracy. We further introduce a novel and generic trainable fusion scheme, which automatically learns where and how to fuse the features extracted from two different input modalities that a two-stream neural network operates on. Our trainable fusion framework selects the optimal network architecture on-the-fly and improves upon standard hard-coded network architectures. Fourth, we propose an efficient approach to estimate 3D pose of objects from a single RGB image. Existing methods typically detect 2D bounding boxes and then predict the object pose using a pipelined approach. The redundancy in different parts of the architecture makes such methods computationally expensive. Moreover, the final pose estimation accuracy depends on the accuracy of the intermediate 2D object detection step. In our method, the object is classified and its pose is regressed in a single shot from the full image using a single, compact fully convolutional neural network. Our approach achieves the state-of-the-art accuracy without requiring any costly pose refinement step and runs in real-time at 50 fps on a modern GPU, which is at least 5X faster than the state of the art.
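The thesis's trainable fusion scheme learns where and how to fuse the two streams; the simple late-fusion baseline it improves upon, concatenating the two streams' feature vectors and regressing the pose with a single linear layer, can be sketched as follows (a minimal illustration with hypothetical names, not the thesis's learned fusion):

```python
def late_fusion_regress(image_feats, heatmap_feats, weights, biases):
    """Late fusion: concatenate features from the image stream and the
    2D joint-heatmap stream, then apply one linear layer to regress the
    (flattened) 3D pose vector."""
    z = list(image_feats) + list(heatmap_feats)  # fused representation
    return [sum(w * x for w, x in zip(row, z)) + b
            for row, b in zip(weights, biases)]
```

In the learned variant, both the fusion point and the layer weights would be selected by training rather than hard-coded as here.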
Artificial Intelligence for Robotics

Bring a new degree of interconnectivity to your world by building your own intelligent robots Key Features Leverage fundamentals of AI and robotics Work through use cases to implement various machine learning algorithms Explore Natural Language Processing (NLP) concepts for efficient decision making in robots Book Description Artificial Intelligence for Robotics starts with an introduction to Robot Operating Systems (ROS), Python, robotic fundamentals, and the software and tools that are required to start out with robotics. You will learn robotics concepts that will be useful for making decisions, along with basic navigation skills. As you make your way through the chapters, you will learn about object recognition and genetic algorithms, which will teach your robot to identify and pick up an irregular object. With plenty of use cases throughout, you will explore natural language processing (NLP) and machine learning techniques to further enhance your robot. In the concluding chapters, you will learn about path planning and goal-oriented programming, which will help your robot prioritize tasks. By the end of this book, you will have learned to give your robot an artificial personality using simulated intelligence. What you will learn Get started with robotics and artificial intelligence Apply simulation techniques to give your robot an artificial personality Understand object recognition using neural networks and supervised learning techniques Pick up objects using genetic algorithms for manipulation Teach your robot to listen using NLP via an expert system Use machine learning and computer vision to teach your robot how to avoid obstacles Understand path planning, decision trees, and search algorithms in order to enhance your robot Who this book is for If you have basic knowledge about robotics and want to build or enhance your existing robot's intelligence, then Artificial Intelligence for Robotics is for you. 
This book is also for enthusiasts who want to gain knowledge of AI and robotics. Downloading the example code for this book You can download the example code files for all Packt books you have purchased from your account at If you purchased this book elsewhere, you can visit and register to have the files e-mailed directly to you.

Extreme Learning Machines (ELMs) within Artificial Intelligence (AI)

Learn all about the Extreme Learning Machine (ELM) Artificial Neural Network (ANN), and how you can best leverage it within Artificial Intelligence (AI) and data science projects including within regression, alternate feature generation, and classification. We cover the ELM architecture along with the ELM packages available in Julia and Python. We’ll conclude by comparing ELMs to multilayer perceptrons (MLPs).
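The defining trick of an ELM is that the hidden layer's weights are random and fixed; only the linear output layer is trained, by solving a least-squares problem. A minimal pure-Python sketch of that recipe (our own illustration, not code from the Julia or Python packages mentioned above) looks like this:

```python
import math
import random

def elm_train(X, y, n_hidden, seed=0):
    """Train an ELM regressor: random hidden layer, least-squares output layer."""
    rng = random.Random(seed)
    d = len(X[0])
    # Hidden-layer weights and biases are drawn once at random and never trained.
    W = [[rng.uniform(-1.0, 1.0) for _ in range(d)] for _ in range(n_hidden)]
    b = [rng.uniform(-1.0, 1.0) for _ in range(n_hidden)]
    H = [_hidden(x, W, b) for x in X]
    # Only the linear output weights are fit, via regularized least squares.
    beta = _ridge_solve(H, y, lam=1e-8)
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return [sum(h * bb for h, bb in zip(_hidden(x, W, b), beta)) for x in X]

def _hidden(x, W, b):
    return [math.tanh(sum(wj * xj for wj, xj in zip(w, x)) + bi)
            for w, bi in zip(W, b)]

def _ridge_solve(H, y, lam):
    """Solve (H^T H + lam*I) beta = H^T y by Gaussian elimination with pivoting."""
    n = len(H[0])
    A = [[sum(row[i] * row[j] for row in H) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    v = [sum(row[i] * yi for row, yi in zip(H, y)) for i in range(n)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        v[i], v[p] = v[p], v[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
            v[r] -= f * v[i]
    beta = [0.0] * n
    for i in range(n - 1, -1, -1):
        beta[i] = (v[i] - sum(A[i][c] * beta[c] for c in range(i + 1, n))) / A[i][i]
    return beta
```

Because the hidden layer is frozen, training reduces to one linear solve, which is why ELMs train far faster than backprop-trained MLPs, at the cost of typically needing more hidden units.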

Audio Signal Processing Engineer - GuruLink - Montréal, QC
Support Vector Machines, Hidden Markov Models, Deep Neural Network architectures; Audio Signal Processing Engineer....
From GuruLink - Tue, 04 Sep 2018 04:37:30 GMT - View all Montréal, QC jobs
Deep learning to predict the lab-of-origin of engineered DNA
Deep learning to predict the lab-of-origin of engineered DNA. Nielsen, Alec Andrew; Voigt, Christopher A. Genetic engineering projects are rapidly growing in scale and complexity, driven by new tools to design and construct DNA. There is increasing concern that widened access to these technologies could lead to attempts to construct cells for malicious intent, illegal drug production, or to steal intellectual property. Determining the origin of a DNA sequence is difficult and time-consuming. Here deep learning is applied to predict the lab-of-origin of a DNA sequence. A convolutional neural network was trained on the Addgene plasmid dataset, which contained 42,364 engineered DNA sequences from 2,230 labs as of February 2016. The network correctly identifies the source lab 48% of the time, and the source lab appears among its top 10 predictions 70% of the time. Often there is not a single "smoking gun" that affiliates a DNA sequence with a lab. Rather, it is a combination of design choices that are individually common but collectively reveal the designer.
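The paper's actual architecture is not reproduced here, but the core ingredients of such a sequence CNN, one-hot encoding the DNA and sliding motif-detecting convolution filters over it before pooling, can be sketched as follows (function names are illustrative):

```python
BASES = "ACGT"

def one_hot(seq):
    """Encode a DNA string as an L x 4 indicator matrix."""
    return [[1.0 if base == c else 0.0 for c in BASES] for base in seq]

def conv_relu(x, filters):
    """Valid 1D convolution over the sequence axis followed by ReLU.
    x: L x 4 one-hot matrix; filters: list of k x 4 weight matrices."""
    k = len(filters[0])
    maps = []
    for f in filters:
        row = []
        for i in range(len(x) - k + 1):
            s = sum(f[j][c] * x[i + j][c] for j in range(k) for c in range(4))
            row.append(max(0.0, s))
        maps.append(row)
    return maps

def global_max_pool(maps):
    """Reduce each feature map to its strongest motif response."""
    return [max(m) for m in maps]
```

A trained network would follow the pooled motif responses with dense layers and a softmax over the 2,230 candidate labs.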
FDA grants breakthrough designation to AliveCor’s KardiaK hyperkalemia software

AliveCor said this week it won breakthrough device designation from the FDA for its KardiaK software platform intended to screen for hyperkalemia, or elevated levels of blood potassium, without the need for drawing blood. The Mountain View, Calif.-based company said the KardiaK system uses a proprietary deep neural network intended to detect hyperkalemia using data from […]

The post FDA grants breakthrough designation to AliveCor’s KardiaK hyperkalemia software appeared first on MassDevice.

Artificial Intelligence for Robotics

Bring a new degree of interconnectivity to your world by building your own intelligent robots Key Features Leverage fundamentals of AI and robotics Work through use cases to implement various machine learning algorithms Explore Natural Language Processing (NLP) concepts for efficient decision making in robots Book Description Artificial Intelligence for Robotics starts with an introduction to Robot Operating Systems (ROS), Python, robotic fundamentals, and the software and tools that are required to start out with robotics. You will learn robotics concepts that will be useful for making decisions, along with basic navigation skills. As you make your way through the chapters, you will learn about object recognition and genetic algorithms, which will teach your robot to identify and pick up an irregular object. With plenty of use cases throughout, you will explore natural language processing (NLP) and machine learning techniques to further enhance your robot. In the concluding chapters, you will learn about path planning and goal-oriented programming, which will help your robot prioritize tasks. By the end of this book, you will have learned to give your robot an artificial personality using simulated intelligence. What you will learn Get started with robotics and artificial intelligence Apply simulation techniques to give your robot an artificial personality Understand object recognition using neural networks and supervised learning techniques Pick up objects using genetic algorithms for manipulation Teach your robot to listen using NLP via an expert system Use machine learning and computer vision to teach your robot how to avoid obstacles Understand path planning, decision trees, and search algorithms in order to enhance your robot Who this book is for If you have basic knowledge about robotics and want to build or enhance your existing robot's intelligence, then Artificial Intelligence for Robotics is for you. 
This book is also for enthusiasts who want to gain knowledge of AI and robotics. Downloading the example code for this book You can download the example code files for all Packt books you have purchased from your account at If you purchased this book elsewhere, you can visit and register to have the files e-mailed directly to you.


Audio Signal Processing Engineer - GuruLink - Montréal, QC
Support Vector Machines, Hidden Markov Models, Deep Neural Network architectures); Audio Signal Processing Engineer....
From GuruLink - Tue, 04 Sep 2018 04:37:30 GMT
A Simple Method of Residential Electricity Load Forecasting by Improved Bayesian Neural Networks
Electricity load forecasting is becoming one of the key issues in addressing the energy crisis, and the time-series Bayesian Neural Network is a popular method in load forecast models. However, it has a long running time and a relatively strong dependence on time and weather factors at the residential level. To solve these problems, this article presents an improved Bayesian Neural Networks (IBNN) forecast model that augments historical load data as inputs to a simple feedforward structure. From an analysis of load time-delay correlations and impact factors (different inputs, number of hidden neurons, historic period of data, forecasting time range, and range requirements of the sample data), advice is given on how to better choose these factors. To validate the performance of the improved model, several residential sample datasets covering one whole year from Ausgrid were selected to build the IBNN. Compared with the time-series load forecast model, the results show that the IBNN can reduce calculation time by a factor of more than 30, and even when time or meteorological factors are missing, it can still predict the load with high accuracy. Compared with other widely used prediction methods, the IBNN also achieves better accuracy and relatively shorter computing time. This improved Bayesian Neural Networks forecasting method can be applied in residential energy management.
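The core idea described above (feeding a window of past load readings to a simple feedforward model instead of time and weather factors) amounts to building a lagged design matrix. A minimal sketch, where the synthetic half-hourly load series and an ordinary least-squares fit standing in for the feedforward network are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

def make_lagged_dataset(load, n_lags=48):
    """Turn a univariate load series into (X, y) pairs where each row of X
    holds the previous n_lags readings -- the 'augmented historical load'
    inputs fed to the network."""
    X = np.stack([load[i:i + n_lags] for i in range(len(load) - n_lags)])
    y = load[n_lags:]
    return X, y

# synthetic half-hourly load for one week (336 points, daily period of 48)
rng = np.random.default_rng(1)
t = np.arange(336)
load = 1.0 + 0.5 * np.sin(2 * np.pi * t / 48) + 0.05 * rng.normal(size=t.size)

X, y = make_lagged_dataset(load, n_lags=48)
# a linear least-squares fit stands in for the feedforward network here
w, *_ = np.linalg.lstsq(X, y, rcond=None)
mae = np.mean(np.abs(X @ w - y))
```

Because the lag window covers a full daily cycle, the model can pick up the periodic structure without any explicit time or weather features.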
Linking genes, circuits, and behavior: network connectivity as a novel endophenotype of externalizing.

Linking genes, circuits, and behavior: network connectivity as a novel endophenotype of externalizing.

Psychol Med. 2018 Sep 12:1-9.

Authors: Sadeh N, Spielberg JM, Logue MW, Hayes JP, Wolf EJ, McGlinchey RE, Milberg WP, Schichman SA, Stone A, Miller MW

BACKGROUND: Externalizing disorders are known to be partly heritable, but the biological pathways linking genetic risk to the manifestation of these costly behaviors remain under investigation. This study sought to identify neural phenotypes associated with genomic vulnerability for externalizing disorders.
METHODS: One-hundred fifty-five White, non-Hispanic veterans were genotyped using a genome-wide array and underwent resting-state functional magnetic resonance imaging. Genetic susceptibility was assessed using an independently developed polygenic score (PS) for externalizing, and functional neural networks were identified using graph theory based network analysis. Tasks of inhibitory control and psychiatric diagnosis (alcohol/substance use disorders) were used to measure externalizing phenotypes.
RESULTS: A polygenic externalizing disorder score (PS) predicted connectivity in a brain circuit (10 nodes, nine links) centered on left amygdala that included several cortical [bilateral inferior frontal gyrus (IFG) pars triangularis, left rostral anterior cingulate cortex (rACC)] and subcortical (bilateral amygdala, hippocampus, and striatum) regions. Directional analyses revealed that bilateral amygdala influenced left prefrontal cortex (IFG) in participants scoring higher on the externalizing PS, whereas the opposite direction of influence was observed for those scoring lower on the PS. Polygenic variation was also associated with higher Participation Coefficient for bilateral amygdala and left rACC, suggesting that genes related to externalizing modulated the extent to which these nodes functioned as communication hubs.
CONCLUSIONS: Findings suggest that externalizing polygenic risk is associated with disrupted connectivity in a neural network implicated in emotion regulation, impulse control, and reinforcement learning. Results provide evidence that this network represents a genetically associated neurobiological vulnerability for externalizing disorders.

PMID: 30207258 [PubMed - as supplied by publisher]

DNN Dataflow Choice Is Overrated. (arXiv:1809.04070v1 [cs.DC])

Authors: Xuan Yang, Mingyu Gao, Jing Pu, Ankita Nayak, Qiaoyi Liu, Steven Emberton Bell, Jeff Ou Setter, Kaidi Cao, Heonjae Ha, Christos Kozyrakis, Mark Horowitz

Many DNN accelerators have been proposed and built using different microarchitectures and program mappings. To fairly compare these different approaches, we modified the Halide compiler to produce hardware as well as CPU and GPU code, and show that Halide's existing scheduling language has enough power to represent all existing dense DNN accelerators. Using this system, we can show that the specific dataflow chosen for the accelerator is not critical to achieving good efficiency: many different dataflows yield similar energy efficiency with good performance. However, finding the best blocking and resource allocation is critical, and we achieve a 2.6X energy saving over the Eyeriss system by reducing the size of the local register file. Adding an additional level in the memory hierarchy saves a further 25%. Based on these observations, we develop an optimizer that automatically finds the optimal blocking and storage hierarchy. Compared with the Eyeriss system, it achieves up to 4.2X energy improvement for Convolutional Neural Networks (CNNs), and 1.6X and 1.8X improvements for Long Short-Term Memories (LSTMs) and multi-layer perceptrons (MLPs), respectively.

On the Structural Sensitivity of Deep Convolutional Networks to the Directions of Fourier Basis Functions. (arXiv:1809.04098v1 [cs.CV])

Authors: Yusuke Tsuzuku, Issei Sato

Data-agnostic quasi-imperceptible perturbations on inputs can severely degrade recognition accuracy of deep convolutional networks. This indicates some structural instability of their predictions and poses a potential security threat. However, characterization of the shared directions of such harmful perturbations remains unknown if they exist, which makes it difficult to address the security threat and performance degradation. Our primal finding is that convolutional networks are sensitive to the directions of Fourier basis functions. We derived the property by specializing a hypothesis of the cause of the sensitivity, known as the linearity of neural networks, to convolutional networks and empirically validated it. As a by-product of the analysis, we propose a fast algorithm to create shift-invariant universal adversarial perturbations available in black-box settings.
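The perturbation directions studied here are aligned with 2-D Fourier basis functions. A hypothetical sketch of building one such data-agnostic, shift-invariant pattern and applying it under an L-infinity budget (the frequencies and budget below are arbitrary choices, not the paper's):

```python
import numpy as np

def fourier_basis_perturbation(h, w, u, v, eps=8 / 255):
    """A single 2-D Fourier basis direction, sign-quantized and scaled to an
    L-infinity budget eps. The pattern is data-agnostic: it depends only on
    the image size and the chosen frequencies (u, v)."""
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    wave = np.cos(2 * np.pi * (u * ys / h + v * xs / w))
    return eps * np.sign(wave)

# apply to a random grayscale "image" in [0, 1]
img = np.random.default_rng(0).uniform(size=(32, 32))
adv = np.clip(img + fourier_basis_perturbation(32, 32, u=4, v=7), 0.0, 1.0)
```

Sweeping (u, v) over all frequencies is one way to probe which directions a given network is most sensitive to, in the spirit of the paper's analysis.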

The next phase: Using neural networks to identify gas-phase molecules
(DOE/Argonne National Laboratory) Argonne scientists have developed a neural network that can identify the structure of molecules in the gas phase, offering a novel technique for national security and pharmaceutical applications.
Comment on Rescaling Data for Machine Learning in Python with Scikit-Learn by vivek
If I have target values in different ranges for prediction using regression with a deep neural network, will normalizing the target values help achieve better accuracy? If yes, which technique should I use for that?
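One common answer (an assumption on our part, not from the original post) is to standardize the targets to zero mean and unit variance, train on the scaled values, and invert the transform on the model's predictions. A minimal sketch with made-up target values:

```python
import numpy as np

# hypothetical wide-range regression targets
y = np.array([120.0, 3500.0, 87.0, 9100.0, 640.0])

mu, sigma = y.mean(), y.std()
y_scaled = (y - mu) / sigma      # train the network on y_scaled
y_back = y_scaled * sigma + mu   # invert model predictions back to real units
```

Min-max scaling to [0, 1] is a reasonable alternative when the output activation is bounded; either way, the inverse transform must be applied before reporting errors in the original units.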
Open Sourcing TonY: Native Support of TensorFlow on Hadoop
Co-authors: Jonathan Hung, Keqiu Hu, and Anthony Hsu LinkedIn heavily relies on artificial intelligence to deliver content and create economic opportunities for its 575+ million members. Following recent rapid advances of deep learning technologies, our AI engineers have started adopting deep neural networks in LinkedIn’s relevance-driven products, including feeds and smart-replies. Many of these use cases are built on TensorFlow, a popular deep learning framework written by Google. In the beginning, our internal TensorFlow users ran the framework on small and unmanaged “bare metal” […]
Nvidia Tesla T4 brings Turing smarts to AI inferencing

At his GTC Japan keynote, Nvidia CEO Jensen Huang noted that AI inferencing—or the use of trained neural network models—is set to become a $20-billion market over the next five years. More and more applications are going to demand services like natural language processing, translation, image and video searches, and AI-driven recommendation, according to Nvidia. To power that future, the company is putting the Turing architecture in data centers using the Tesla T4 inferencing card and letting models run on those cards with the TensorRT Hyperscale Platform.

The Tesla ...


QSAR Study of (5-Nitroheteroaryl-1,3,4-Thiadiazole-2-yl) Piperazinyl Derivatives to Predict New Similar Compounds as Antileishmanial Agents
To search for newer and more potent antileishmanial drugs, a series of 36 5-(5-nitroheteroaryl-2-yl)-1,3,4-thiadiazole derivatives was subjected to a quantitative structure-activity relationship (QSAR) analysis for studying, interpreting, and predicting activities and for designing new compounds, using several statistical tools. The multiple linear regression (MLR), nonlinear regression (RNLM), and artificial neural network (ANN) models were developed using 30 molecules with pIC50 values ranging from 3.155 to 5.046. The best MLR, RNLM, and ANN models show conventional correlation coefficients R of 0.750, 0.782, and 0.967, with leave-one-out cross-validation correlation coefficients of 0.722, 0.744, and 0.720, respectively. The predictive ability of these models was evaluated by external validation using a test set of 6 molecules, with predicted correlation coefficients of 0.840, 0.850, and 0.802, respectively. The applicability domains of the MLR and RNLM transparent models were investigated using William's plot to detect outlier and out-of-domain compounds. We expect that this study will be of great help in lead optimization during early drug discovery of similar compounds.
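The leave-one-out cross-validation reported for the MLR model above can be sketched as follows; the descriptor matrix and pIC50-like activities below are synthetic stand-ins for the 30-molecule training set, not the study's data:

```python
import numpy as np

def loo_q2(X, y):
    """Leave-one-out cross-validated Q^2 for a multiple linear regression:
    each molecule is held out in turn, the model is refit, and the held-out
    activity is predicted."""
    X1 = np.column_stack([np.ones(len(y)), X])   # add intercept column
    preds = np.empty_like(y)
    for i in range(len(y)):
        mask = np.arange(len(y)) != i            # hold out molecule i
        w, *_ = np.linalg.lstsq(X1[mask], y[mask], rcond=None)
        preds[i] = X1[i] @ w
    return 1.0 - np.sum((y - preds) ** 2) / np.sum((y - y.mean()) ** 2)

# synthetic descriptors and activities (illustrative only)
rng = np.random.default_rng(2)
X = rng.normal(size=(30, 3))
y = X @ np.array([1.0, 2.0, 3.0]) + 0.1 * rng.normal(size=30)
q2 = loo_q2(X, y)
```

A large gap between the fitted R and the cross-validated statistic is the usual warning sign of overfitting in small QSAR datasets like this one.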
Big Data Data Scientist - BelairDirect - Montréal, QC
Proficiency in applied analytical techniques (clustering, decision trees, neural networks, SVM (support vector machines), collaborative filtering, k-nearest...
From belairdirect - Thu, 13 Sep 2018 14:41:28 GMT
Pain response in babies' brains controlled in 'similar way to adults'
Researchers from the Department of Paediatrics and Wellcome Centre for Integrative Neuroimaging at the University of Oxford, UK, have identified the neural network that helps control babies' brain activity in response to pain in a similar way to adults.
Chip controlling exoskeleton keeps patients' brains cool
Scientists developed a model for predicting hand movement trajectories. The predictions rely on a linear model rather than neural networks. It offers the same prediction accuracy while requiring less memory and fewer computations, so the sensor keeps the patient's brain cool. This technology could drive exoskeletons that would allow patients with impaired mobility to regain movement.
Machine learning material properties from the periodic table using convolutional neural networks
Chem. Sci., 2018, Accepted Manuscript
DOI: 10.1039/C8SC02648C, Edge Article
Open Access
Creative Commons Licence  This article is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported Licence.
Xiaolong Zheng, Peng Zheng, Ruizhi Zhang
In recent years, the convolutional neural network (CNN) has achieved great success in image recognition and shown powerful feature extraction ability. Here we show that CNNs can learn the inner...
The content of this RSS Feed (c) The Royal Society of Chemistry


R Deep Learning Essentials

Implement neural network models in R 3.5 using TensorFlow, Keras, and MXNet Key Features Use R 3.5 for building deep learning models for computer vision and text Apply deep learning techniques in cloud for large-scale processing Build, train, and optimize neural network models on a range of datasets Book Description Deep learning is a powerful subset of machine learning that is very successful in domains such as computer vision and natural language processing (NLP). This second edition of R Deep Learning Essentials will open the gates for you to enter the world of neural networks by building powerful deep learning models using the R ecosystem. This book will introduce you to the basic principles of deep learning and teach you to build a neural network model from scratch. As you make your way through the book, you will explore deep learning libraries, such as Keras, MXNet, and TensorFlow, and create interesting deep learning models for a variety of tasks and problems, including structured data, computer vision, text data, anomaly detection, and recommendation systems. You'll cover advanced topics, such as generative adversarial networks (GANs), transfer learning, and large-scale deep learning in the cloud. In the concluding chapters, you will learn about the theoretical concepts of deep learning projects, such as model optimization, overfitting, and data augmentation, together with other advanced topics. By the end of this book, you will be fully prepared and able to implement deep learning concepts in your research work or projects. 
What you will learn Build shallow neural network prediction models Prevent models from overfitting the data to improve generalizability Explore techniques for finding the best hyperparameters for deep learning models Create NLP models using Keras and TensorFlow in R Use deep learning for computer vision tasks Implement deep learning tasks, such as NLP, recommendation systems, and autoencoders Who this book is for This second edition of R Deep Learning Essentials is for aspiring data scientists, data analysts, machine learning developers, and deep learning enthusiasts who are well versed in machine learning concepts and are looking to explore the deep learning paradigm using R. Fundamental understanding of the R language is necessary to get the most out of this book. Downloading the example code for this book You can download the example code files for all Packt books you have purchased from your account at If you purchased this book elsewhere, you can visit and register to have the files e-mailed directly to you.

Helping computers fill in the gaps between video frames
(Massachusetts Institute of Technology) In a paper being presented at this week's European Conference on Computer Vision, MIT researchers describe an add-on module that helps artificial intelligence systems called convolutional neural networks, or CNNs, to fill in the gaps between video frames to greatly improve the network's activity recognition.
BrainChip to Release Biologically Inspired Neuromorphic System-on-a-Chip for AI Acceleration
BrainChip has announced the Akida Neuromorphic System-on-Chip (NSoC), the first production-volume artificial intelligence accelerator utilizing Spiking Neural Networks (SNN).

from All About Circuits News

One-Shot Speaker Identification for a Service Robot using a CNN-based Generic Verifier. (arXiv:1809.04115v1 [eess.AS])

Authors: Ivette Vélez (1), Caleb Rascon (1), Gibrán Fuentes-Pineda (1) ((1) Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas (IIMAS), Universidad Nacional Autónoma de México (UNAM), Mexico.)

In service robotics, there is an interest in identifying the user by voice alone. However, in application scenarios where a service robot acts as a waiter or a store clerk, new users are expected to enter the environment frequently. Typically, speaker identification models need to be retrained when this occurs, which can take an impractical amount of time. In this paper, a new approach for speaker identification through verification has been developed using a Siamese Convolutional Neural Network architecture (SCNN), which learns to generically verify whether two audio signals are from the same speaker. By keeping an external database of recorded audio of the users, identification is carried out by verifying the speech input against each of its entries. If new users are encountered, their recorded audio only needs to be added to the external database for them to be identified, without retraining. The system was evaluated in four different aspects: the performance of the verifier, the performance of the system as a classifier using clean audio, its speed, and its accuracy in real-life settings. Its performance, in conjunction with its one-shot learning capabilities, makes the proposed system a viable alternative for speaker identification for service robots.
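The verification-based identification loop described above can be sketched as follows; the cosine-similarity verifier, the threshold value, and the two-entry database are toy stand-ins for the trained SCNN and the external audio database:

```python
import numpy as np

def identify(query_emb, database, verify, threshold=0.7):
    """Identification through verification: score the query against every
    enrolled user's embedding. Enrolling a new user just means adding an
    entry to `database` -- no retraining. `verify` stands in for the
    Siamese network's similarity score."""
    scores = {name: verify(query_emb, emb) for name, emb in database.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None  # None = unknown speaker

def cosine(a, b):
    """Toy verifier: cosine similarity between two embeddings."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

db = {"alice": np.array([1.0, 0.1]), "bob": np.array([0.0, 1.0])}
who = identify(np.array([0.9, 0.2]), db, cosine)
```

Note that identification cost grows linearly with the number of enrolled users, which is the trade-off accepted in exchange for retraining-free enrollment.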

Taking a machine's perspective: Human deciphering of adversarial images. (arXiv:1809.04120v1 [cs.CV])

Authors: Zhenglong Zhou, Chaz Firestone

How similar is the human mind to the sophisticated machine-learning systems that mirror its performance? Models of object categorization based on convolutional neural networks (CNNs) have achieved human-level benchmarks in assigning known labels to novel images. These advances support transformative technologies such as autonomous vehicles and machine diagnosis; beyond this, they also serve as candidate models for the visual system itself -- not only in their output but perhaps even in their underlying mechanisms and principles. However, unlike human vision, CNNs can be "fooled" by adversarial examples -- carefully crafted images that appear as nonsense patterns to humans but are recognized as familiar objects by machines, or that appear as one object to humans and a different object to machines. This seemingly extreme divergence between human and machine classification challenges the promise of these new advances, both as applied image-recognition systems and also as models of the human mind. Surprisingly, however, little work has empirically investigated human classification of such adversarial stimuli: Does human and machine performance fundamentally diverge? Or could humans decipher such images and predict the machine's preferred labels? Here, we show that human and machine classification of adversarial stimuli are robustly related: In seven experiments on five prominent and diverse adversarial imagesets, human subjects reliably identified the machine's chosen label over relevant foils. This pattern persisted for images with strong antecedent identities, and even for images described as "totally unrecognizable to human eyes". We suggest that human intuition may be a more reliable guide to machine (mis)classification than has typically been imagined, and we explore the consequences of this result for minds and machines alike.

Cartesian Neural Network Constitutive Models for Data-driven Elasticity Imaging. (arXiv:1809.04121v1 [cs.LG])

Authors: Cameron Hoerig, Jamshid Ghaboussi, Michael F. Insana

Elasticity images map biomechanical properties of soft tissues to aid in the detection and diagnosis of pathological states. In particular, quasi-static ultrasonic (US) elastography techniques use force-displacement measurements acquired during an US scan to parameterize the spatio-temporal stress-strain behavior. Current methods use a model-based inverse approach to estimate the parameters associated with a chosen constitutive model. However, model-based methods rely on simplifying assumptions of tissue biomechanical properties, often limiting elastography to imaging one or two linear-elastic parameters.

We previously described a data-driven method for building neural network constitutive models (NNCMs) that learn stress-strain relationships from force-displacement data. Using measurements acquired on gelatin phantoms, we demonstrated the ability of NNCMs to characterize linear-elastic mechanical properties without an initial model assumption and thus circumvent the mathematical constraints typically encountered in classic model-based approaches to the inverse problem. While successful, we were required to use a priori knowledge of the internal object shape to define the spatial distribution of regions exhibiting different material properties.

Here, we introduce Cartesian neural network constitutive models (CaNNCMs) that are capable of using data to model both linear-elastic mechanical properties and their distribution in space. We demonstrate the ability of CaNNCMs to capture arbitrary material property distributions using stress-strain data from simulated phantoms. Furthermore, we show that a trained CaNNCM can be used to reconstruct a Young's modulus image. CaNNCMs are an important step toward data-driven modeling and imaging the complex mechanical properties of soft tissues.

JigsawNet: Shredded Image Reassembly using Convolutional Neural Network and Loop-based Composition. (arXiv:1809.04137v1 [cs.CV])

Authors: Canyu Le, Xin Li

This paper proposes a novel algorithm to reassemble an arbitrarily shredded image to its original state. Existing reassembly pipelines commonly consist of a local matching stage and a global composition stage. In the local stage, a key challenge in fragment reassembly is to reliably compute and identify correct pairwise matches, for which most existing algorithms use handcrafted features and hence cannot reliably handle complicated puzzles. We build a deep convolutional neural network to detect the compatibility of a pairwise stitching, and use it to prune computed pairwise matches. To improve the network's efficiency and accuracy, we transfer the CNN calculation to the stitching region and apply a boost training strategy. In the global composition stage, we replace the commonly adopted greedy edge selection strategies with two new loop-closure-based search algorithms. Extensive experiments show that our algorithm significantly outperforms existing methods on solving various puzzles, especially challenging ones with many fragment pieces.

Heated-Up Softmax Embedding. (arXiv:1809.04157v1 [cs.LG])

Authors: Xu Zhang, Felix Xinnan Yu, Svebor Karaman, Wei Zhang, Shih-Fu Chang

Metric learning aims at learning a distance which is consistent with the semantic meaning of the samples. The problem is generally solved by learning an embedding for each sample such that the embeddings of samples of the same category are compact while the embeddings of samples of different categories are spread-out in the feature space. We study the features extracted from the second last layer of a deep neural network based classifier trained with the cross entropy loss on top of the softmax layer. We show that training classifiers with different temperature values of softmax function leads to features with different levels of compactness. Leveraging these insights, we propose a "heating-up" strategy to train a classifier with increasing temperatures, leading the corresponding embeddings to achieve state-of-the-art performance on a variety of metric learning benchmarks.
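The temperature knob at the heart of the method is a one-line change to softmax. A minimal sketch (the logit values are arbitrary) showing how lower temperatures sharpen the distribution and higher ones flatten it, which is the compactness effect the abstract describes:

```python
import numpy as np

def softmax_T(logits, T=1.0):
    """Softmax with temperature: T < 1 sharpens the distribution,
    T > 1 flattens it toward uniform."""
    z = logits / T
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])
sharp = softmax_T(logits, T=0.5)  # low temperature: more confident
flat = softmax_T(logits, T=5.0)   # high temperature: closer to uniform
```

The "heating-up" schedule in the paper corresponds to increasing T over the course of training rather than holding it fixed.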

          Leabra7: a Python package for modeling recurrent, biologically-realistic neural networks. (arXiv:1809.04166v1 [cs.NE])

Authors: C. Daniel Greenidge, Noam Miller, Kenneth A. Norman

Emergent is a software package that uses the AdEx neural dynamics model and LEABRA learning algorithm to simulate and train arbitrary recurrent neural network architectures in a biologically-realistic manner. We present Leabra7, a complementary Python library that implements these same algorithms. Leabra7 is developed and distributed using modern software development principles, and integrates tightly with Python's scientific stack. We demonstrate recurrent Leabra7 networks using traditional pattern-association tasks and a standard machine learning task, classifying the IRIS dataset.

          Time Series Analysis of Clickstream Logs from Online Courses. (arXiv:1809.04177v1 [cs.HC])

Authors: Yohan Jo, Keith Maki, Gaurav Tomar

Due to the rapidly rising popularity of Massive Open Online Courses (MOOCs), there is a growing demand for scalable automated support technologies for student learning, and transferring traditional educational resources to online contexts has become an increasingly relevant problem. For learning science theories to be applicable, educators need a way to identify the learning behaviors of students that contribute to learning outcomes, and to use them to design and provide personalized intervention support. Click logs are an important source of information about students' learning behaviors; however, the current literature offers limited understanding of how these behaviors are represented within click logs. In this project, we exploit the temporal dynamics of student behavior, performing behavior modeling via graphical modeling approaches and performance prediction via recurrent neural network approaches, in order to first identify student behaviors and then use them to predict final outcomes in the course. Our experiments showed that the long short-term memory (LSTM) model is capable of learning long-term dependencies in a sequence and outperforms other strong baselines in the prediction task. Further, these sequential approaches to click log analysis can be successfully imported to other courses when used with results obtained from graphical-model behavior modeling.

          What can linguistics and deep learning contribute to each other?. (arXiv:1809.04179v1 [cs.CL])

Authors: Tal Linzen

Joe Pater's target article calls for greater interaction between neural network research and linguistics. I expand on this call and show how such interaction can benefit both fields. Linguists can contribute to research on neural networks for language technologies by clearly delineating the linguistic capabilities that can be expected of such systems, and by constructing controlled experimental paradigms that can determine whether those desiderata have been met. In the other direction, neural networks can benefit the scientific study of language by providing infrastructure for modeling human sentence processing and for evaluating the necessity of particular innate constraints on language acquisition.

          Iterative Segmentation from Limited Training Data: Applications to Congenital Heart Disease. (arXiv:1809.04182v1 [cs.CV])

Authors: Danielle F. Pace, Adrian V. Dalca, Tom Brosch, Tal Geva, Andrew J. Powell, Jürgen Weese, Mehdi H. Moghari, Polina Golland

We propose a new iterative segmentation model which can be accurately learned from a small dataset. A common approach is to train a model to directly segment an image, requiring a large collection of manually annotated images to capture the anatomical variability in a cohort. In contrast, we develop a segmentation model that recursively evolves a segmentation in several steps, and implement it as a recurrent neural network. We learn model parameters by optimizing the intermediate steps of the evolution in addition to the final segmentation. To this end, we train our segmentation propagation model by presenting incomplete and/or inaccurate input segmentations paired with a recommended next step. Our work aims to alleviate challenges in segmenting heart structures from cardiac MRI for patients with congenital heart disease (CHD), which encompasses a range of morphological deformations and topological changes. We demonstrate the advantages of this approach on a dataset of 20 images from CHD patients, learning a model that accurately segments individual heart chambers and great vessels. Compared to direct segmentation, the iterative method yields more accurate segmentation for patients with the most severe CHD malformations.

          Searching for Efficient Multi-Scale Architectures for Dense Image Prediction. (arXiv:1809.04184v1 [cs.CV])

Authors: Liang-Chieh Chen, Maxwell D. Collins, Yukun Zhu, George Papandreou, Barret Zoph, Florian Schroff, Hartwig Adam, Jonathon Shlens

The design of neural network architectures is an important component for achieving state-of-the-art performance with machine learning systems across a broad array of tasks. Much work has endeavored to design and build architectures automatically through clever construction of a search space paired with simple learning algorithms. Recent progress has demonstrated that such meta-learning methods may exceed scalable human-invented architectures on image classification tasks. An open question is the degree to which such methods may generalize to new domains. In this work we explore the construction of meta-learning techniques for dense image prediction focused on the tasks of scene parsing, person-part segmentation, and semantic image segmentation. Constructing viable search spaces in this domain is challenging because of the multi-scale representation of visual information and the necessity to operate on high resolution imagery. Based on a survey of techniques in dense image prediction, we construct a recursive search space and demonstrate that even with efficient random search, we can identify architectures that outperform human-invented architectures and achieve state-of-the-art performance on three dense prediction tasks including 82.7% on Cityscapes (street scene parsing), 71.3% on PASCAL-Person-Part (person-part segmentation), and 87.9% on PASCAL VOC 2012 (semantic image segmentation). Additionally, the resulting architecture is more computationally efficient, requiring half the parameters and half the computational cost of previous state-of-the-art systems.

          Layerwise Perturbation-Based Adversarial Training for Hard Drive Health Degree Prediction. (arXiv:1809.04188v1 [cs.LG])

Authors: Jianguo Zhang, Ji Wang, Lifang He, Zhao Li, Philip S. Yu

With the development of cloud computing and big data, the reliability of data storage systems becomes increasingly important. Previous researchers have shown that machine learning algorithms based on SMART attributes are effective methods to predict hard drive failures. In this paper, we use SMART attributes to predict hard drive health degrees, which are helpful for taking different fault tolerant actions in advance. Given the highly imbalanced SMART datasets, it is nontrivial to predict the health degree precisely. The proposed model would encounter overfitting and biased fitting problems if trained with traditional methods. In order to resolve this problem, we propose two strategies to better utilize imbalanced data and improve performance. Firstly, we design a layerwise perturbation-based adversarial training method which can add perturbations to any layers of a neural network to improve the generalization of the network. Secondly, we extend the training method to the semi-supervised setting, making it possible to utilize unlabeled data with a potential of failure to further improve the performance of the model. Our extensive experiments on two real-world hard drive datasets demonstrate the superiority of the proposed schemes for both supervised and semi-supervised classification. The model trained by the proposed method can correctly predict the hard drive health status 5 and 15 days in advance. Finally, we verify the generality of the proposed training method in other similar anomaly detection tasks where the dataset is imbalanced. The results argue that the proposed methods are applicable to other domains.

          Multimodal neural pronunciation modeling for spoken languages with logographic origin. (arXiv:1809.04203v1 [cs.CL])

Authors: Minh Nguyen, Gia H. Ngo, Nancy F. Chen

Graphemes of most languages encode pronunciation, though some are more explicit than others. Languages like Spanish have a straightforward mapping between their graphemes and phonemes, while this mapping is more convoluted for languages like English. Spoken languages such as Cantonese present even more challenges in pronunciation modeling: (1) they do not have a standard written form, and (2) the closest graphemic origins are logographic Han characters, only a subset of which implicitly encodes pronunciation. In this work, we propose a multimodal approach to predict the pronunciation of Cantonese logographic characters, using neural networks with a geometric representation of logographs and the pronunciation of cognates in historically related languages. The proposed framework improves performance by 18.1% and 25.0% over unimodal and multimodal baselines, respectively.

          Temporal Pattern Attention for Multivariate Time Series Forecasting. (arXiv:1809.04206v1 [cs.LG])

Authors: Shun-Yao Shih, Fan-Keng Sun, Hung-yi Lee

Forecasting multivariate time series data, such as prediction of electricity consumption, solar power production, and polyphonic piano pieces, has numerous valuable applications. However, complex and non-linear interdependencies between time steps and series complicate the task. To obtain accurate predictions, it is crucial to model long-term dependency in time series data, which can be achieved to some extent by a recurrent neural network (RNN) with an attention mechanism. A typical attention mechanism reviews the information at each previous time step and selects the relevant information to help generate the outputs, but it fails to capture temporal patterns across multiple time steps. In this paper, we propose to use a set of filters to extract time-invariant temporal patterns, which is similar to transforming the time series data into its "frequency domain". We then propose a novel attention mechanism to select the relevant time series, and use their "frequency domain" information for forecasting. We apply the proposed model to several real-world tasks and achieve state-of-the-art performance on all but one of them. We also show that, to some degree, the learned filters play the role of bases in a discrete Fourier transform.
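
The "frequency domain" analogy can be made concrete with a discrete Fourier transform: a repeating temporal pattern shows up as energy concentrated in one frequency bin. This is a toy illustration, not the authors' learned filters:

```python
import numpy as np

t = np.arange(32)
series = np.sin(2 * np.pi * t / 8)       # a pattern repeating every 8 steps

spectrum = np.abs(np.fft.rfft(series))   # magnitude spectrum of the series
dominant = int(np.argmax(spectrum))      # 32 samples / period 8 -> bin 4
```

The paper's convolutional filters play an analogous role, but are learned from data rather than fixed Fourier bases.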

          Convolutional Neural Network Approach for EEG-based Emotion Recognition using Brain Connectivity and its Spatial Information. (arXiv:1809.04208v1 [cs.HC])

Authors: Seong-Eun Moon, Soobeom Jang, Jong-Seok Lee

Emotion recognition based on electroencephalography (EEG) has received attention as a way to implement human-centric services. However, there is still much room for improvement, particularly in terms of the recognition accuracy. In this paper, we propose a novel deep learning approach using convolutional neural networks (CNNs) for EEG-based emotion recognition. In particular, we employ brain connectivity features that have not been used with deep learning models in previous studies, which can account for synchronous activations of different brain regions. In addition, we develop a method to effectively capture asymmetric brain activity patterns that are important for emotion recognition. Experimental results confirm the effectiveness of our approach.

          Attention based visual analysis for fast grasp planning with multi-fingered robotic hand. (arXiv:1809.04226v1 [cs.RO])

Authors: Zhen Deng, Ge Gao, Simone Frintrop, Jianwei Zhang

We present an attention based visual analysis framework to compute grasp-relevant information in order to guide grasp planning using a multi-fingered robotic hand. Our approach uses a computational visual attention model to locate regions of interest in a scene, and uses a deep convolutional neural network to detect grasp type and point for a sub-region of the object presented in a region of interest. We demonstrate the proposed framework in object grasping tasks, in which the information generated from the proposed framework is used as prior information to guide the grasp planning. Results show that the proposed framework can not only speed up grasp planning with more stable configurations, but also is able to handle unknown objects. Furthermore, our framework can handle cluttered scenarios. A new Grasp Type Dataset (GTD) that considers 6 commonly used grasp types and covers 12 household objects is also presented.

          Ensemble of Convolutional Neural Networks for Automatic Grading of Diabetic Retinopathy and Macular Edema. (arXiv:1809.04228v1 [cs.CV])

Authors: Avinash Kori, Sai Saketh Chennamsetty, Mohammed Safwan K.P., Varghese Alex

In this manuscript, we automate the grading of diabetic retinopathy and macular edema from fundus images using an ensemble of convolutional neural networks. The limited availability of labeled data for supervised learning was circumvented by a transfer learning approach: the models in the ensemble were pre-trained on a large dataset of natural images and later fine-tuned with the limited data for the task of choice. For each image, the ensemble of classifiers generates multiple predictions, and a max-voting based approach is used to attain the final grade of the anomaly in the image. For the task of grading DR, on the test data (n=56), the ensemble achieved an accuracy of 83.9%, while for the task of grading macular edema the network achieved an accuracy of 95.45% (n=44).
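
Max-voting over ensemble predictions is straightforward; a minimal sketch (the per-model outputs below are made up for illustration):

```python
from collections import Counter

def max_vote(predictions):
    """Return the most frequent predicted grade among ensemble members.
    Ties are broken by first occurrence."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical per-model DR grades (0-4) for a single fundus image:
final_grade = max_vote([2, 2, 3])
```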

          EEG-based video identification using graph signal modeling and graph convolutional neural network. (arXiv:1809.04229v1 [eess.SP])

Authors: Soobeom Jang, Seong-Eun Moon, Jong-Seok Lee

This paper proposes a novel graph signal-based deep learning method for electroencephalography (EEG) and its application to EEG-based video identification. We present new methods to effectively represent EEG data as signals on graphs, and learn them using graph convolutional neural networks. Experimental results for video identification using EEG responses obtained while watching videos show the effectiveness of the proposed approach in comparison to existing methods. Effective schemes for graph signal representation of EEG are also discussed.

          Rapid Training of Very Large Ensembles of Diverse Neural Networks. (arXiv:1809.04270v1 [cs.LG])

Authors: Abdul Wasay, Yuze Liao, Stratos Idreos

Ensembles of deep neural networks with diverse architectures significantly improve generalization accuracy. However, training such ensembles requires a large amount of computational resources and time, as every network in the ensemble has to be trained separately. In practice, this restricts the number of different deep neural network architectures that can be included within an ensemble. We propose a new approach to address this problem. Our approach captures the structural similarity between members of a neural network ensemble and trains only a single network once. Subsequently, this knowledge is transferred to all members of the ensemble using function-preserving transformations, after which these ensemble networks converge significantly faster than when trained from scratch. We show through experiments on the CIFAR-10, CIFAR-100, and SVHN datasets that our approach can train large and diverse ensembles of deep neural networks, achieving accuracy comparable to existing approaches in a fraction of their training time. In particular, our approach trains an ensemble of $100$ variants of deep neural networks with diverse architectures up to $6 \times$ faster than existing approaches, and this improvement in training cost grows linearly with the size of the ensemble.
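
One classic function-preserving transformation (a Net2Net-style widening sketch, assumed here for illustration; the paper's exact transformations are not specified in the abstract) duplicates a hidden unit and halves the outgoing weights of the original and the copy, leaving the network's outputs unchanged:

```python
import numpy as np

def widen_layer(W1, W2, unit):
    """Add a copy of hidden `unit` to a two-layer net (W1: in x hidden,
    W2: hidden x out) and halve the outgoing weights of both copies,
    so the widened network computes exactly the same function."""
    W1_new = np.concatenate([W1, W1[:, unit:unit + 1]], axis=1)
    W2_new = np.concatenate([W2, W2[unit:unit + 1, :]], axis=0)
    W2_new[unit, :] /= 2.0   # original unit's outgoing weights
    W2_new[-1, :] /= 2.0     # duplicated unit's outgoing weights
    return W1_new, W2_new

x = np.array([[1.0, -2.0]])
W1 = np.array([[0.5, -1.0, 2.0],
               [1.5, 0.3, -0.7]])
W2 = np.array([[1.0, 0.5],
               [-0.2, 2.0],
               [0.7, -1.0]])
W1w, W2w = widen_layer(W1, W2, unit=0)

out_before = np.maximum(x @ W1, 0) @ W2     # ReLU hidden layer
out_after = np.maximum(x @ W1w, 0) @ W2w    # widened net, same output
```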

          Transforming acoustic characteristics to deceive playback spoofing countermeasures of speaker verification systems. (arXiv:1809.04274v1 [cs.SD])

Authors: Fuming Fang, Junichi Yamagishi, Isao Echizen, Md Sahidullah, Tomi Kinnunen

Automatic speaker verification (ASV) systems use a playback detector to filter out playback attacks and ensure verification reliability. Since current playback detection models are almost always trained using genuine and played-back speech, it may be possible to degrade their performance by transforming the acoustic characteristics of the played-back speech close to that of the genuine speech. One way to do this is to enhance speech "stolen" from the target speaker before playback. We tested the effectiveness of a playback attack using this method by using the speech enhancement generative adversarial network to transform acoustic characteristics. Experimental results showed that use of this "enhanced stolen speech" method significantly increases the equal error rates for the baseline used in the ASVspoof 2017 challenge and for a light convolutional neural network-based method. The results also showed that its use degrades the performance of a Gaussian mixture model-universal background model-based ASV system. This type of attack is thus an urgent problem needing to be solved.

          Chinese Poetry Generation with a Salient-Clue Mechanism. (arXiv:1809.04313v1 [cs.AI])

Authors: Xiaoyuan Yi, Ruoyu Li, Maosong Sun

As a precious part of the human cultural heritage, Chinese poetry has influenced people for generations. Automatic poetry composition is a challenge for AI. In recent years, significant progress has been made in this area benefiting from the development of neural networks. However, the coherence in meaning, theme or even artistic conception for a generated poem as a whole still remains a big problem. In this paper, we propose a novel Salient-Clue mechanism for Chinese poetry generation. Different from previous work which tried to exploit all the context information, our model selects the most salient characters automatically from each so-far generated line to gradually form a salient clue, which is utilized to guide successive poem generation process so as to eliminate interruptions and improve coherence. Besides, our model can be flexibly extended to control the generated poem in different aspects, for example, poetry style, which further enhances the coherence. Experimental results show that our model is very effective, outperforming three strong baselines.

          Deep learning for time series classification: a review. (arXiv:1809.04356v1 [cs.LG])

Authors: Hassan Ismail Fawaz, Germain Forestier, Jonathan Weber, Lhassane Idoumghar, Pierre-Alain Muller

Time Series Classification (TSC) is an important and challenging problem in data mining. With the increase of time series data availability, hundreds of TSC algorithms have been proposed. Among these methods, only a few have considered Deep Neural Networks (DNNs) to perform this task. This is surprising, as deep learning has seen very successful applications in recent years. DNNs have indeed revolutionized the field of computer vision, especially with the advent of novel deeper architectures such as Residual and Convolutional Neural Networks. Apart from images, sequential data such as text and audio can also be processed with DNNs to reach state-of-the-art performance for document classification and speech recognition. In this article, we study the current state-of-the-art performance of deep learning algorithms for TSC by presenting an empirical study of the most recent DNN architectures for TSC. We give an overview of the most successful deep learning applications in various time series domains under a unified taxonomy of DNNs for TSC. We also provide an open source deep learning framework to the TSC community in which we implemented each of the compared approaches and evaluated them on a univariate TSC benchmark (the UCR archive) and 12 multivariate time series datasets. By training 8,730 deep learning models on 97 time series datasets, we propose the most exhaustive study of DNNs for TSC to date.

          Training Deep Neural Networks with Different Datasets In-the-wild: The Emotion Recognition Paradigm. (arXiv:1809.04359v1 [cs.LG])

Authors: Dimitrios Kollias, Stefanos Zafeiriou

A novel procedure is presented in this paper, for training a deep convolutional and recurrent neural network, taking into account both the available training data set and some information extracted from similar networks trained with other relevant data sets. This information is included in an extended loss function used for the network training, so that the network can have an improved performance when applied to the other data sets, without forgetting the learned knowledge from the original data set. Facial expression and emotion recognition in-the-wild is the test bed application that is used to demonstrate the improved performance achieved using the proposed approach. In this framework, we provide an experimental study on categorical emotion recognition using datasets from a very recent related emotion recognition challenge.

          NNCP: A citation count prediction methodology based on deep neural network learning techniques. (arXiv:1809.04365v1 [cs.DL])

Authors: Ali Abrishami, Sadegh Aliakbary

With the growing number of published scientific papers worldwide, the need for evaluation and quality-assessment methods for research papers is increasing. Scientific fields such as scientometrics, informetrics and bibliometrics establish quantified analysis methods and measurements for scientific papers. In this area, an important problem is to predict the future influence of a published paper; in particular, early discrimination between influential papers and insignificant papers may find important applications. In this regard, one of the most important metrics is the number of citations to the paper, since this metric is widely utilized in the evaluation of scientific publications and, moreover, serves as the basis for many other metrics such as the h-index. In this paper, we propose a novel method for predicting the long-term citations of a paper based on the number of its citations in the first few years after publication. In order to train a citation prediction model, we employed artificial neural networks, a powerful machine learning tool with growing applications in many domains including image and text processing. The empirical experiments show that our proposed method outperforms state-of-the-art methods with respect to prediction accuracy for both yearly and total prediction of the number of citations.
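
As a baseline for the idea of predicting long-term citations from early counts, a simple least-squares fit suffices on toy data (the numbers and the exactly linear relationship below are fabricated for illustration; the paper uses neural networks for this mapping):

```python
import numpy as np

# Hypothetical citation counts in years 1-3 after publication (rows = papers).
early = np.array([[3.0, 5.0, 8.0],
                  [1.0, 2.0, 2.0],
                  [10.0, 12.0, 15.0],
                  [0.0, 1.0, 4.0]])
# Synthetic long-term totals: here, exactly twice the early-year sum.
total = 2.0 * early.sum(axis=1)

# Least-squares linear predictor of long-term citations from early counts.
coef, *_ = np.linalg.lstsq(early, total, rcond=None)
predicted = early @ coef
```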

          Bayesian Semi-supervised Learning with Graph Gaussian Processes. (arXiv:1809.04379v1 [cs.LG])

Authors: Yin Cheng Ng, Ricardo Silva

We propose a data-efficient Gaussian process-based Bayesian approach to the semi-supervised learning problem on graphs. The proposed model shows extremely competitive performance when compared to the state-of-the-art graph neural networks on semi-supervised learning benchmark experiments, and outperforms the neural networks in active learning experiments where labels are scarce. Furthermore, the model does not require a validation data set for early stopping to control over-fitting. Our model can be viewed as an instance of empirical distribution regression weighted locally by network connectivity. We further motivate the intuitive construction of the model with a Bayesian linear model interpretation where the node features are filtered by an operator related to the graph Laplacian. The method can be easily implemented by adapting off-the-shelf scalable variational inference algorithms for Gaussian processes.

          Label Denoising with Large Ensembles of Heterogeneous Neural Networks. (arXiv:1809.04403v1 [cs.CV])

Authors: Pavel Ostyakov, Elizaveta Logacheva, Roman Suvorov, Vladimir Aliev, Gleb Sterkin, Oleg Khomenko, Sergey I. Nikolenko

Despite recent advances in computer vision based on various convolutional architectures, video understanding remains an important challenge. In this work, we present and discuss a top solution for the large-scale video classification (labeling) problem introduced as a Kaggle competition based on the YouTube-8M dataset. We show and compare different approaches to preprocessing, data augmentation, model architectures, and model combination. Our final model is based on a large ensemble of video- and frame-level models but fits into rather limiting hardware constraints. We apply an approach based on knowledge distillation to deal with noisy labels in the original dataset and the recently developed mixup technique to improve the basic models.

          Re-purposing Compact Neuronal Circuit Policies to Govern Reinforcement Learning Tasks. (arXiv:1809.04423v1 [cs.LG])

Authors: Ramin M. Hasani, Mathias Lechner, Alexander Amini, Daniela Rus, Radu Grosu

We propose an effective method for creating interpretable control agents by \textit{re-purposing} the function of a biological neural circuit model to govern simulated and real-world reinforcement learning (RL) test-beds. Inspired by the structure of the nervous system of the soil worm \emph{C. elegans}, we introduce \emph{Neuronal Circuit Policies} (NCPs) as a novel recurrent neural network instance with liquid time-constants, universal approximation capabilities and interpretable dynamics. We theoretically show that they can approximate any finite simulation time of a given continuous n-dimensional dynamical system, with $n$ output units and some hidden units. We model instances of the policies and learn their synaptic and neuronal parameters to control standard RL tasks, and demonstrate their application for autonomous parking of a real rover robot on a pre-defined trajectory. For reconfiguration of the \emph{purpose} of the neural circuit, we adopt a search-based RL algorithm. We show that our neuronal circuit policies perform as well as deep neural network policies, with the advantage of realizing interpretable dynamics at the cell level. We theoretically find bounds for the time-varying dynamics of the circuits, and introduce a novel way to reason about networks' dynamics.
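
A single continuous-time neuron with a fixed time constant can be simulated by Euler integration; this generic leaky-integrator sketch only hints at the liquid time-constant dynamics of NCPs, whose effective time constants vary with the input:

```python
def leaky_integrator_step(v, inp, tau=1.0, dt=0.1):
    """One explicit Euler step of tau * dv/dt = -v + inp."""
    return v + dt * (-v + inp) / tau

# With constant input, the state converges toward the input value.
v = 0.0
for _ in range(1000):
    v = leaky_integrator_step(v, inp=1.0)
```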

          Real-time Multiple People Tracking with Deeply Learned Candidate Selection and Person Re-Identification. (arXiv:1809.04427v1 [cs.CV])

Authors: Long Chen, Haizhou Ai, Zijie Zhuang, Chong Shang

Online multi-object tracking is a fundamental problem in time-critical video analysis applications. A major challenge in the popular tracking-by-detection framework is how to associate unreliable detection results with existing tracks. In this paper, we propose to handle unreliable detection by collecting candidates from the outputs of both detection and tracking. The intuition behind generating redundant candidates is that detection and tracks can complement each other in different scenarios. Detection results of high confidence prevent tracking drifts in the long term, and predictions of tracks can handle noisy detection caused by occlusion. In order to apply optimal selection from a considerable amount of candidates in real time, we present a novel scoring function based on a fully convolutional neural network that shares most computations over the entire image. Moreover, we adopt a deeply learned appearance representation, trained on large-scale person re-identification datasets, to improve the identification ability of our tracker. Extensive experiments show that our tracker achieves real-time and state-of-the-art performance on a widely used people tracking benchmark.

          Frame-level speaker embeddings for text-independent speaker recognition and analysis of end-to-end model. (arXiv:1809.04437v1 [eess.AS])

Authors: Suwon Shon, Hao Tang, James Glass

In this paper, we propose a Convolutional Neural Network (CNN) based speaker recognition model for extracting robust speaker embeddings. The embedding can be extracted efficiently with linear activation in the embedding layer. To understand how the speaker recognition model operates with text-independent input, we modify the structure to extract frame-level speaker embeddings from each hidden layer. We feed utterances from the TIMIT dataset to the trained network and use several proxy tasks to study the network's ability to represent speech input and differentiate voice identity. We found that the networks are better at discriminating broad phonetic classes than individual phonemes. In particular, frame-level embeddings that belong to the same phonetic class are similar (based on cosine distance) for the same speaker. The frame-level representation also allows us to analyze the networks at the frame level, and has the potential for other analyses to improve speaker recognition.
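
The cosine-distance comparison of frame-level embeddings reduces to a one-liner (the embedding vectors below are made up; real ones would come from the trained network's hidden layers):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical frame-level embeddings from the same phonetic class:
sim = cosine_similarity([0.9, 0.1, 0.2], [0.8, 0.2, 0.1])
```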

          Convolutional Neural Networks for Fast Approximation of Graph Edit Distance. (arXiv:1809.04440v1 [cs.LG])

Authors: Yunsheng Bai, Hao Ding, Yizhou Sun, Wei Wang

Graph Edit Distance (GED) computation is a core operation of many widely-used graph applications, such as graph classification, graph matching, and graph similarity search. However, computing the exact GED between two graphs is NP-complete. Most current approximate algorithms are based on solving a combinatorial optimization problem, which involves complicated design and high time complexity. In this paper, we propose a novel end-to-end neural network based approach to GED approximation, aiming to alleviate the computational burden while preserving good performance. The proposed approach, named GSimCNN, turns GED computation into a learning problem. Each graph is considered as a set of nodes, represented by learnable embedding vectors. The GED computation is then considered as a two-set matching problem, where a higher matching score leads to a lower GED. A Convolutional Neural Network (CNN) based approach is proposed to tackle the set matching problem. We test our algorithm on three real graph datasets, and our model achieves significant performance enhancement against state-of-the-art approximate GED computation algorithms.

          An empirical learning-based validation procedure for simulation workflow. (arXiv:1809.04441v1 [cs.LG])

Authors: Zhuqing Liu, Liyuanjun Lai, Lin Zhang

Simulation workflow is a top-level model for the design and control of the simulation process. It connects multiple simulation components with time and interaction restrictions to form a complete simulation system. Before the construction and evaluation of the component models, the validation of the upper-layer simulation workflow is of the utmost importance in a simulation system. However, methods specifically for validating simulation workflows are very limited, and many of the existing validation techniques are domain-dependent, with cumbersome questionnaire design and expert scoring. Therefore, this paper presents an empirical learning-based validation procedure to implement semi-automated evaluation of simulation workflows. First, representative features of general simulation workflows and their relations with validation indices are proposed. The calculation process of workflow credibility based on the Analytic Hierarchy Process (AHP) is then introduced. In order to make full use of historical data and implement more efficient validation, four learning algorithms, including back propagation neural network (BPNN), extreme learning machine (ELM), evolving new-neuron (eNFN) and fast incremental gaussian mixture model (FIGMN), are introduced for constructing the empirical relation between workflow credibility and its features. A case study on a landing-process simulation workflow is established to test the feasibility of the proposed procedure. The experimental results also provide a useful overview of the state-of-the-art learning algorithms for the credibility evaluation of simulation models.
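
The AHP step can be sketched with a pairwise comparison matrix whose principal eigenvector gives the index weights (the matrix entries here are invented; real ones would come from expert judgments):

```python
import numpy as np

# Pairwise comparison matrix for three hypothetical validation indices:
# entry [i, j] says how much more important index i is than index j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# AHP priority weights: principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()
```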

DeepProteomics: Protein family classification using Shallow and Deep Networks. (arXiv:1809.04461v1 [q-bio.QM])

Authors: Anu Vazhayil, Vinayakumar R, Soman KP

Knowledge of protein function is necessary as it gives a clear picture of biological processes. Nevertheless, many protein sequences are found and added to the databases but lack functional annotation, and laboratory experiments take a considerable amount of time to annotate the sequences. This gives rise to the need for computational techniques to classify proteins based on their functions. In our work, we collected data from Swiss-Prot containing 40433 proteins grouped into 30 families. We pass the data to recurrent neural network (RNN), long short-term memory (LSTM) and gated recurrent unit (GRU) models, and compare the results with trigram features fed to deep and shallow neural networks on the same dataset. Through this approach, we achieve a maximum of around 78% accuracy for the classification of protein families.
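The trigram featurization mentioned above is simple to illustrate (a sketch; the example sequence is made up and real pipelines would map counts into a fixed vocabulary vector):

```python
from collections import Counter

def trigram_features(seq):
    """Count overlapping 3-character amino-acid trigrams of a protein sequence."""
    return Counter(seq[i:i + 3] for i in range(len(seq) - 2))

feats = trigram_features("MKVLAA")  # toy sequence
```

Each count vector can then be fed to a shallow or deep feed-forward network, while the RNN/LSTM/GRU variants consume the raw sequence directly.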

Multi range Real-time depth inference from a monocular stabilized footage using a Fully Convolutional Neural Network. (arXiv:1809.04467v1 [cs.CV])

Authors: Clément Pinard, Laure Chevalley, Antoine Manzanera, David Filliat

Using a neural network architecture for depth map inference from monocular stabilized videos, with application to UAV videos in rigid scenes, we propose a multi-range architecture for unconstrained UAV flight, leveraging flight data from sensors to produce accurate depth maps of uncluttered outdoor environments. We test our algorithm on both synthetic scenes and real UAV flight data. Quantitative results are given for synthetic scenes with slightly noisy orientation, and show that our multi-range architecture improves depth inference. Along with this article is a video that presents our results more thoroughly.

Learning structure-from-motion from motion. (arXiv:1809.04471v1 [cs.CV])

Authors: Clément Pinard, Laure Chevalley, Antoine Manzanera, David Filliat

This work is based on a questioning of the quality metrics used by deep neural networks performing depth prediction from a single image, and then of the usability of recently published works on unsupervised learning of depth from videos. To overcome their limitations, we propose to learn, in the same unsupervised manner, a depth map inference system from monocular videos that takes a pair of images as input. This algorithm actually learns structure-from-motion from motion, and not only structure from context appearance. The scale factor issue is explicitly treated, and the absolute depth map can be estimated from camera displacement magnitude, which can be easily measured from cheap external sensors. Our solution is also much more robust with respect to domain variation and adaptation via fine-tuning, because it does not rely entirely on depth from context. Two use cases are considered: unstabilized moving-camera videos, and stabilized ones. This choice is motivated by the UAV (Unmanned Aerial Vehicle) use case, which generally provides reliable orientation measurement. We provide a set of experiments showing that, used in real conditions where only speed can be known, our network outperforms competitors for most depth quality measures. Results are given on the well-known KITTI dataset, which provides robust stabilization for our second use case, but also contains moving scenes which are very typical of the in-car road context. We then present results on a synthetic dataset that we believe to be more representative of typical UAV scenes. Lastly, we present two domain adaptation use cases showing the superior robustness of our method compared to single-view depth algorithms, indicating that it is better suited for highly variable visual contexts.

Investigating the generalizability of EEG-based Cognitive Load Estimation Across Visualizations. (arXiv:1809.04507v1 [cs.HC])

Authors: Viral Parekh, Maneesh Bilalpur, Sharavan Kumar, Stefan Winkler, C V Jawahar, Ramanathan Subramanian

We examine if EEG-based cognitive load (CL) estimation is generalizable across the character, spatial pattern, bar graph and pie chart-based visualizations for the n-back task. CL is estimated via two recent approaches: (a) Deep convolutional neural network, and (b) Proximal support vector machines. Experiments reveal that CL estimation suffers across visualizations, motivating the need for effective machine learning techniques to benchmark visual interface usability for a given analytic task.

Joint Sub-bands Learning with Clique Structures for Wavelet Domain Super-Resolution. (arXiv:1809.04508v1 [cs.CV])

Authors: Zhisheng Zhong, Tiancheng Shen, Yibo Yang, Zhouchen Lin, Chao Zhang

Convolutional neural networks (CNNs) have recently achieved great success in single-image super-resolution (SISR). However, these methods tend to produce over-smoothed outputs and miss some textural details. To solve these problems, we propose the Super-Resolution CliqueNet (SRCliqueNet) to reconstruct the high resolution (HR) image with better textural details in the wavelet domain. The proposed SRCliqueNet firstly extracts a set of feature maps from the low resolution (LR) image by the clique blocks group. Then we send the set of feature maps to the clique up-sampling module to reconstruct the HR image. The clique up-sampling module consists of four sub-nets which predict the high resolution wavelet coefficients of four sub-bands. Since we consider the edge feature properties of the four sub-bands, the four sub-nets are connected to each other so that they can learn the coefficients of the four sub-bands jointly. Finally, we apply the inverse discrete wavelet transform (IDWT) to the output of the four sub-nets at the end of the clique up-sampling module to increase the resolution and reconstruct the HR image. Extensive quantitative and qualitative experiments on benchmark datasets show that our method achieves superior performance over the state-of-the-art methods.
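The final IDWT step, which merges predicted sub-bands back into a higher-resolution signal, can be illustrated in 1-D (a toy analogue: the paper works in 2-D with four sub-bands, whereas the Haar case below merges one approximation and one detail band):

```python
def haar_idwt_1d(approx, detail):
    """Inverse 1-D Haar transform: merge low- and high-frequency sub-bands
    back into a signal of twice the length."""
    s = 2 ** 0.5
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / s)  # even sample
        out.append((a - d) / s)  # odd sample
    return out

# Sub-bands of the signal [1, 3, 2, 2] under the orthonormal Haar transform.
approx = [4 / 2 ** 0.5, 4 / 2 ** 0.5]
detail = [-2 / 2 ** 0.5, 0.0]
signal = haar_idwt_1d(approx, detail)
```

In SRCliqueNet the sub-band coefficients come from the four learned sub-nets rather than from a forward transform, so the IDWT directly doubles the spatial resolution of the prediction.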

Genetic algorithms with DNN-based trainable crossover as an example of partial specialization of general search. (arXiv:1809.04520v1 [cs.NE])

Authors: Alexey Potapov, Sergey Rodionov

Universal induction relies on some general search procedure that is doomed to be inefficient. One possibility to achieve both generality and efficiency is to specialize this procedure w.r.t. any given narrow task. However, complete specialization that implies direct mapping from the task parameters to solutions (discriminative models) without search is not always possible. In this paper, partial specialization of general search is considered in the form of genetic algorithms (GAs) with a specialized crossover operator. We perform a feasibility study of this idea implementing such an operator in the form of a deep feedforward neural network. GAs with trainable crossover operators are compared with the result of complete specialization, which is also represented as a deep neural network. Experimental results show that specialized GAs can be more efficient than both general GAs and discriminative models.
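The idea of a GA with a specialized crossover operator can be sketched as follows (a toy illustration: a fixed per-gene blend stands in for the trained deep feed-forward crossover network, and the fitness function is a made-up example):

```python
import random

def neural_crossover(p1, p2, weights):
    """Stand-in for the paper's DNN crossover: a learned per-gene blend of
    two parents (`weights` plays the role of trained parameters)."""
    return [w * a + (1 - w) * b for w, a, b in zip(weights, p1, p2)]

def evolve(population, fitness, weights, generations=20):
    """Simple elitist GA whose crossover is the (pluggable) learned operator."""
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:len(population) // 2]
        children = [neural_crossover(random.choice(parents),
                                     random.choice(parents), weights)
                    for _ in range(len(population) - len(parents))]
        population = parents + children
    return max(population, key=fitness)

random.seed(0)
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
best = evolve(pop, lambda x: -sum(v * v for v in x), weights=[0.5, 0.5, 0.5])
```

In the paper the blend weights are not fixed but produced by a network trained on the task family, which is what makes the specialization partial: search remains, but its operators are adapted.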

Using the Tsetlin Machine to Learn Human-Interpretable Rules for High-Accuracy Text Categorization with Medical Applications. (arXiv:1809.04547v1 [cs.LG])

Authors: Geir Thore Berge, Ole-Christoffer Granmo, Tor Oddbjørn Tveit, Morten Goodwin, Lei Jiao, Bernt Viggo Matheussen

Medical applications challenge today's text categorization techniques by demanding both high accuracy and ease-of-interpretation. Although deep learning has provided a leap ahead in accuracy, this leap comes at the sacrifice of interpretability. To address this accuracy-interpretability challenge, we here introduce, for the first time, a text categorization approach that leverages the recently introduced Tsetlin Machine. In all brevity, we represent the terms of a text as propositional variables. From these, we capture categories using simple propositional formulae, such as: if "rash" and "reaction" and "penicillin" then Allergy. The Tsetlin Machine learns these formulae from a labelled text, utilizing conjunctive clauses to represent the particular facets of each category. Indeed, even the absence of terms (negated features) can be used for categorization purposes. Our empirical results are quite conclusive. The Tsetlin Machine either performs on par with or outperforms all of the evaluated methods on both the 20 Newsgroups and IMDb datasets, as well as on a non-public clinical dataset. On average, the Tsetlin Machine delivers the best recall and precision scores across the datasets. The GPU implementation of the Tsetlin Machine is further 8 times faster than the GPU implementation of the neural network. We thus believe that our novel approach can have a significant impact on a wide range of text analysis applications, forming a promising starting point for deeper natural language understanding with the Tsetlin Machine.
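The rule form described above, a conjunctive clause over present and negated terms, is easy to make concrete (a sketch of how a learned clause would be evaluated, not the Tsetlin Machine's learning procedure itself):

```python
def clause(terms_present, positives, negatives=()):
    """Evaluate a conjunctive clause: true iff every positive literal is
    present and every negated literal is absent."""
    return (all(t in terms_present for t in positives)
            and all(t not in terms_present for t in negatives))

# The paper's example: if "rash" and "reaction" and "penicillin" then Allergy.
doc = {"rash", "reaction", "penicillin"}
is_allergy = clause(doc, positives=("rash", "reaction", "penicillin"))
```

The Tsetlin Machine learns many such clauses per category from labelled text; classification is then a vote over clause outputs, which keeps the model human-readable.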

Human Driving Skill Modeling Using Neural Networks for Haptic Assistance in Realistic Virtual Environments. (arXiv:1809.04549v1 [cs.HC])

Authors: Hojin Lee, Hyoungkyun Kim, Seungmoon Choi

This work addresses our research on driving skill modeling using artificial neural networks for haptic assistance. In this paper, we present a haptic driving training simulator with performance-based, error-corrective haptic feedback. One key component of our simulator is the ability to learn an optimized driving skill model from the driving data of expert drivers. To this end, we obtain a model utilizing artificial neural networks to extract the desired movement of a steering wheel and an accelerator pedal based on the experts' prediction. Then, we can deliver haptic assistance based on a driver's performance error, which is the difference between the current and the desired movement. We validate the performance of our framework in two user experiments recruiting expert and novice drivers to show the feasibility and applicability of using neural networks for performance-based haptic driving skill transfer.
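The error-corrective feedback described above reduces to a simple proportional law once the expert model supplies a desired movement (a minimal sketch; the gain value and angle units are hypothetical, and the real system drives a haptic steering wheel and pedal):

```python
def haptic_feedback(current, desired, gain=2.0):
    """Error-corrective assistance torque, proportional to the gap between
    the driver's current control input and the expert model's desired one."""
    return gain * (desired - current)

# Driver holds the wheel at 0.10 rad; the learned expert model wants 0.25 rad.
torque = haptic_feedback(current=0.10, desired=0.25)
```

In the paper the `desired` value comes from the neural driving-skill model evaluated at each instant, so the assistance vanishes exactly when the driver tracks expert behavior.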

End-to-end Audiovisual Speech Activity Detection with Bimodal Recurrent Neural Models. (arXiv:1809.04553v1 [cs.CL])

Authors: Fei Tao, Carlos Busso

Speech activity detection (SAD) plays an important role in current speech processing systems, including automatic speech recognition (ASR). SAD is particularly difficult in environments with acoustic noise. A practical solution is to incorporate visual information, increasing the robustness of the SAD approach. An audiovisual system has the advantage of being robust to different speech modes (e.g., whisper speech) or background noise. Recent advances in audiovisual speech processing using deep learning have opened opportunities to capture in a principled way the temporal relationships between acoustic and visual features. This study explores this idea, proposing a \emph{bimodal recurrent neural network} (BRNN) framework for SAD. The approach models the temporal dynamics of the sequential audiovisual data, improving the accuracy and robustness of the proposed SAD system. Instead of estimating hand-crafted features, the study investigates an end-to-end training approach, where acoustic and visual features are directly learned from the raw data during training. The experimental evaluation considers a large audiovisual corpus with over 60.8 hours of recordings, collected from 105 speakers. The results demonstrate that the proposed framework leads to absolute improvements up to 1.2% under practical scenarios over a VAD baseline using only audio implemented with deep neural network (DNN). The proposed approach achieves 92.7% F1-score when it is evaluated using the sensors from a portable tablet under noisy acoustic environment, which is only 1.0% lower than the performance obtained under ideal conditions (e.g., clean speech obtained with a high definition camera and a close-talking microphone).

Solving Sinhala Language Arithmetic Problems using Neural Networks. (arXiv:1809.04557v1 [cs.CL])

Authors: W.M.T Chathurika, K.C.E De Silva, A.M. Raddella, E.M.R.S. Ekanayake, A. Nugaliyadde, Y. Mallawarachchi

A methodology is presented to solve arithmetic problems in the Sinhala language using a neural network. The system comprises (a) keyword identification, (b) question identification, and (c) mathematical operation identification, combined using a neural network. Naive Bayes classification is used to identify keywords, and a Conditional Random Field to identify the question and the operation which should be performed on the identified keywords to achieve the expected result. "One vs. all" classification of sentences is done using a neural network. All functions are combined through the neural network, which builds an equation to solve the problem. The paper compares each methodology in ARIS and Mahoshadha to the method presented here. Mahoshadha2 learns to solve arithmetic problems with an accuracy of 76%.

FINN-R: An End-to-End Deep-Learning Framework for Fast Exploration of Quantized Neural Networks. (arXiv:1809.04570v1 [cs.AR])

Authors: Michaela Blott, Thomas Preusser, Nicholas Fraser, Giulio Gambardella, Kenneth O'Brien, Yaman Umuroglu

Convolutional Neural Networks have rapidly become the most successful machine learning algorithm, enabling ubiquitous machine vision and intelligent decisions even on embedded computing systems. While the underlying arithmetic is structurally simple, compute and memory requirements are challenging. One of the promising opportunities is leveraging reduced-precision representations for inputs, activations and model parameters. The resulting scalability in performance, power efficiency and storage footprint provides interesting design compromises in exchange for a small reduction in accuracy. FPGAs are ideal for exploiting low-precision inference engines leveraging custom precisions to achieve the required numerical accuracy for a given application. In this article, we describe the second generation of the FINN framework, an end-to-end tool which enables design space exploration and automates the creation of fully customized inference engines on FPGAs. Given a neural network description, the tool optimizes for given platforms, design targets and a specific precision. We introduce formalizations of resource cost functions and performance predictions, and elaborate on the optimization algorithms. Finally, we evaluate a selection of reduced-precision neural networks ranging from CIFAR-10 classifiers to YOLO-based object detection on a range of platforms including PYNQ and AWS F1, demonstrating unprecedented measured throughput of 50 TOp/s on AWS F1 and 5 TOp/s on embedded devices.
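The reduced-precision representations at the heart of the framework can be illustrated with a uniform symmetric quantizer (a generic sketch of fixed-point quantization, not FINN-R's actual quantization scheme):

```python
def quantize(x, bits):
    """Uniformly quantize a value in [-1, 1) to a signed `bits`-bit code,
    returning the dequantized value it represents."""
    levels = 2 ** (bits - 1)
    q = max(-levels, min(levels - 1, round(x * levels)))  # clamp to range
    return q / levels

w4 = quantize(0.30, bits=4)  # 4-bit fixed-point approximation of a weight
```

Sweeping `bits` per layer is exactly the kind of accuracy-versus-resource trade-off that such a design-space exploration tool automates.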

Forecasting Across Time Series Databases using Recurrent Neural Networks on Groups of Similar Series: A Clustering Approach. (arXiv:1710.03222v2 [cs.LG] UPDATED)

Authors: Kasun Bandara, Christoph Bergmeir, Slawek Smyl

With the advent of Big Data, databases containing large quantities of similar time series are nowadays available in many applications. Forecasting time series in these domains with traditional univariate forecasting procedures leaves great potential for producing accurate forecasts untapped. Recurrent neural networks (RNNs), and in particular Long Short-Term Memory (LSTM) networks, have proven recently that they are able to outperform state-of-the-art univariate time series forecasting methods in this context when trained across all available time series. However, if the time series database is heterogeneous, accuracy may degrade, so that on the way towards fully automatic forecasting methods in this space, a notion of similarity between the time series needs to be built into the methods. To this end, we present a prediction model that can be used with different types of RNN models on subgroups of similar time series, which are identified by time series clustering techniques. We assess our proposed methodology using LSTM networks, a widely popular RNN variant. Our method achieves competitive results on benchmarking datasets under competition evaluation procedures. In particular, in terms of mean sMAPE accuracy, it consistently outperforms the baseline LSTM model and outperforms all other methods on the CIF2016 forecasting competition dataset.
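The cluster-then-forecast pipeline can be sketched in miniature (a toy stand-in: real pipelines cluster on richer time-series features and train an LSTM per group, whereas here grouping is by mean level and the per-group "model" is a last-value average):

```python
def cluster_series(series_db, n_clusters=2):
    """Group series by their mean level (toy stand-in for the clustering step)."""
    ranked = sorted(series_db, key=lambda s: sum(s) / len(s))
    size = len(ranked) // n_clusters
    return [ranked[i * size:(i + 1) * size] for i in range(n_clusters)]

def group_forecast(group):
    """One-step forecast per group: average of each series' last value
    (stand-in for an RNN trained across the group)."""
    return sum(s[-1] for s in group) / len(group)

db = [[1, 2, 3], [2, 2, 2], [10, 11, 12], [9, 10, 11]]
low, high = cluster_series(db)
```

The point of the grouping is that each model only sees series that resemble each other, which is what protects accuracy when the overall database is heterogeneous.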

Layer-wise Learning of Stochastic Neural Networks with Information Bottleneck. (arXiv:1712.01272v4 [cs.LG] UPDATED)

Authors: Thanh T. Nguyen, Jaesik Choi

Deep neural networks (DNNs) offer flexible modeling capability for various important machine learning problems. Given the same neural modeling capability, the success of DNNs is attributed to how effectively we could learn the networks. Currently, the maximum likelihood estimate (MLE) principle has been a de-facto standard for learning DNNs. However, the MLE principle is not explicitly tailored to the hierarchical structure of DNNs. In this work, we propose the Parametric Information Bottleneck (PIB) framework as a fully information-theoretic learning principle of DNNs. Motivated by the Information Bottleneck principle, our framework efficiently induces relevant information under compression constraint into each layer of DNNs via multi-objective learning. Consequently, PIB generalizes the MLE principle in DNNs, indeed empirically exploits the neural representations better than MLE and a partially information-theoretic treatment, and offers better generalization and adversarial robustness on MNIST and CIFAR10.

Predicting Hurricane Trajectories using a Recurrent Neural Network. (arXiv:1802.02548v3 [cs.LG] UPDATED)

Authors: Sheila Alemany, Jonathan Beltran, Adrian Perez, Sam Ganzfried

Hurricanes are cyclones circulating about a defined center whose closed wind speeds exceed 75 mph originating over tropical and subtropical waters. At landfall, hurricanes can result in severe disasters. The accuracy of predicting their trajectory paths is critical to reduce economic loss and save human lives. Given the complexity and nonlinearity of weather data, a recurrent neural network (RNN) could be beneficial in modeling hurricane behavior. We propose the application of a fully connected RNN to predict the trajectory of hurricanes. We employed the RNN over a fine grid to reduce typical truncation errors. We utilized their latitude, longitude, wind speed, and pressure publicly provided by the National Hurricane Center (NHC) to predict the trajectory of a hurricane at 6-hour intervals. Results show that this proposed technique is competitive to methods currently employed by the NHC and can predict up to approximately 120 hours of hurricane path.
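Turning a track of 6-hourly (latitude, longitude, wind speed, pressure) observations into RNN training pairs can be sketched as follows (an illustration of the windowing only; the track values below are made up and the paper's network consumes these windows on a fine grid):

```python
def make_windows(track, steps=2):
    """Slice a hurricane track of (lat, lon, wind, pressure) observations at
    6-hour intervals into pairs: `steps` past states -> next position."""
    pairs = []
    for i in range(len(track) - steps):
        history = track[i:i + steps]
        next_lat, next_lon = track[i + steps][:2]
        pairs.append((history, (next_lat, next_lon)))
    return pairs

# Hypothetical 18 hours of a track (4 observations at 6-hour spacing).
track = [(25.0, -75.0, 80, 985), (25.4, -75.6, 85, 982),
         (25.9, -76.1, 90, 979), (26.5, -76.5, 95, 976)]
pairs = make_windows(track)
```

Chaining predictions (feeding each predicted position back in) is what extends the forecast horizon toward the roughly 120 hours reported above.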

CNN+LSTM Architecture for Speech Emotion Recognition with Data Augmentation. (arXiv:1802.05630v2 [cs.SD] UPDATED)

Authors: Caroline Etienne, Guillaume Fidanza, Andrei Petrovskii, Laurence Devillers, Benoit Schmauch

In this work we design a neural network for recognizing emotions in speech, using the IEMOCAP dataset. Following the latest advances in audio analysis, we use an architecture involving both convolutional layers, for extracting high-level features from raw spectrograms, and recurrent ones for aggregating long-term dependencies. We examine the techniques of data augmentation with vocal tract length perturbation, layer-wise optimizer adjustment, batch normalization of recurrent layers, and obtain highly competitive results of 64.5% for weighted accuracy and 61.7% for unweighted accuracy on four emotions.

The History Began from AlexNet: A Comprehensive Survey on Deep Learning Approaches. (arXiv:1803.01164v2 [cs.CV] UPDATED)

Authors: Md Zahangir Alom, Tarek M. Taha, Christopher Yakopcic, Stefan Westberg, Paheding Sidike, Mst Shamima Nasrin, Brian C Van Esesn, Abdul A S. Awwal, Vijayan K. Asari

Deep learning has demonstrated tremendous success in a variety of application domains in the past few years. This new field of machine learning has been growing rapidly and has been applied in most application domains, with some new modalities of application that help to open new opportunities. Different methods have been proposed across categories of learning approaches, including supervised, semi-supervised and unsupervised learning. Experimental results show state-of-the-art performance of deep learning over traditional machine learning approaches in the fields of Image Processing, Computer Vision, Speech Recognition, Machine Translation, Art, Medical Imaging, Medical Information Processing, Robotics and Control, Bioinformatics, Natural Language Processing (NLP), Cybersecurity, and many more. This report presents a brief survey of the development of DL approaches, including the Deep Neural Network (DNN), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN) including Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU), Auto-Encoder (AE), Deep Belief Network (DBN), Generative Adversarial Network (GAN), and Deep Reinforcement Learning (DRL). In addition, we include recent developments of advanced variant DL techniques based on the mentioned DL approaches. Furthermore, DL approaches explored and evaluated in different application domains are also included in this survey. We also cover recently developed frameworks, SDKs, and benchmark datasets that are used for implementing and evaluating deep learning approaches. Some surveys have been published on deep learning in neural networks [1, 38], along with a survey on RL [234]. However, those papers have not discussed the individual advanced techniques for training large-scale deep learning models or the recently developed methods of generative models [1].

Towards Knowledge Discovery from the Vatican Secret Archives. In Codice Ratio -- Episode 1: Machine Transcription of the Manuscripts. (arXiv:1803.03200v3 [cs.DL] UPDATED)

Authors: Donatella Firmani, Marco Maiorino, Paolo Merialdo, Elena Nieddu

In Codice Ratio is a research project to study tools and techniques for analyzing the contents of historical documents conserved in the Vatican Secret Archives (VSA). In this paper, we present our efforts to develop a system to support the transcription of medieval manuscripts. The goal is to provide paleographers with a tool to reduce their efforts in transcribing large volumes, as those stored in the VSA, producing good transcriptions for significant portions of the manuscripts. We propose an original approach based on character segmentation. Our solution is able to deal with the dirty segmentation that inevitably occurs in handwritten documents. We use a convolutional neural network to recognize characters and language models to compose word transcriptions. Our approach requires minimal training efforts, making the transcription process more scalable as the production of training sets requires a few pages and can be easily crowdsourced. We have conducted experiments on manuscripts from the Vatican Registers, an unreleased corpus containing the correspondence of the popes. With training data produced by 120 high school students, our system has been able to produce good transcriptions that can be used by paleographers as a solid basis to speedup the transcription process at a large scale.

Automatic segmentation of the spinal cord and intramedullary multiple sclerosis lesions with convolutional neural networks. (arXiv:1805.06349v2 [cs.CV] UPDATED)

Authors: Charley Gros, Benjamin De Leener, Atef Badji, Josefina Maranzano, Dominique Eden, Sara M. Dupont, Jason Talbott, Ren Zhuoquiong, Yaou Liu, Tobias Granberg, Russell Ouellette, Yasuhiko Tachibana, Masaaki Hori, Kouhei Kamiya, Lydia Chougar, Leszek Stawiarz, Jan Hillert, Elise Bannier, Anne Kerbrat, Gilles Edan, Pierre Labauge, Virginie Callot, Jean Pelletier, Bertrand Audoin, Henitsoa Rasoanandrianina, Jean-Christophe Brisset, Paola Valsasina, Maria A. Rocca, Massimo Filippi, Rohit Bakshi, Shahamat Tauhid, Ferran Prados, Marios Yiannakas, Hugh Kearney, Olga Ciccarelli, Seth Smith, Constantina Andrada Treaba, Caterina Mainero, Jennifer Lefeuvre, Daniel S. Reich, Govind Nair, Vincent Auclair, Donald G. McLaren, Allan R. Martin, Michael G. Fehlings, Shahabeddin Vahdat, Ali Khatibi, Julien Doyon, et al. (4 additional authors not shown)

The spinal cord is frequently affected by atrophy and/or lesions in multiple sclerosis (MS) patients. Segmentation of the spinal cord and lesions from MRI data provides measures of damage, which are key criteria for the diagnosis, prognosis, and longitudinal monitoring in MS. Automating this operation eliminates inter-rater variability and increases the efficiency of large-throughput analysis pipelines. Robust and reliable segmentation across multi-site spinal cord data is challenging because of the large variability related to acquisition parameters and image artifacts. The goal of this study was to develop a fully-automatic framework, robust to variability in both image parameters and clinical condition, for segmentation of the spinal cord and intramedullary MS lesions from conventional MRI data. Scans of 1,042 subjects (459 healthy controls, 471 MS patients, and 112 with other spinal pathologies) were included in this multi-site study (n=30). Data spanned three contrasts (T1-, T2-, and T2*-weighted) for a total of 1,943 volumes. The proposed cord and lesion automatic segmentation approach is based on a sequence of two Convolutional Neural Networks (CNNs). To deal with the very small proportion of spinal cord and/or lesion voxels compared to the rest of the volume, a first CNN with 2D dilated convolutions detects the spinal cord centerline, followed by a second CNN with 3D convolutions that segments the spinal cord and/or lesions. When compared against manual segmentation, our CNN-based approach showed a median Dice of 95% vs. 88% for PropSeg, a state-of-the-art spinal cord segmentation method. Regarding lesion segmentation on MS data, our framework provided a Dice of 60%, a relative volume difference of -15%, and a lesion-wise detection sensitivity and precision of 83% and 77%, respectively. The proposed framework is open-source and readily available in the Spinal Cord Toolbox.
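The Dice overlap reported above is a standard segmentation metric and is simple to compute (a sketch; the voxel index sets below are made up):

```python
def dice(seg_a, seg_b):
    """Dice coefficient between two binary masks given as sets of voxel
    indices: 2|A ∩ B| / (|A| + |B|)."""
    a, b = set(seg_a), set(seg_b)
    if not a and not b:
        return 1.0  # two empty masks agree perfectly
    return 2 * len(a & b) / (len(a) + len(b))

# Toy prediction vs. manual annotation (hypothetical voxel indices).
score = dice({1, 2, 3, 4}, {3, 4, 5})
```

A Dice of 1.0 means perfect overlap, so the reported median of 95% for cord segmentation indicates near-complete agreement with manual raters.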

Processing of missing data by neural networks. (arXiv:1805.07405v2 [cs.LG] UPDATED)

Authors: Marek Smieja, Łukasz Struski, Jacek Tabor, Bartosz Zieliński, Przemysław Spurek

We propose a general, theoretically justified mechanism for processing missing data by neural networks. Our idea is to replace typical neuron response in the first hidden layer by its expected value. This approach can be applied for various types of networks at minimal cost in their modification. Moreover, in contrast to recent approaches, it does not require complete data for training. Experimental results performed on different types of architectures show that our method gives better results than typical imputation strategies and other methods dedicated for incomplete data.
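For a linear first-layer neuron the "expected response" idea reduces to contributing the expected value of each missing coordinate (a simplified sketch; the paper handles general distributions and nonlinearities, and the weights, means and inputs below are hypothetical):

```python
def expected_response(x, weights, bias, means):
    """Expected response of a linear first-layer neuron when some inputs are
    missing (None): by linearity of expectation, each missing coordinate
    contributes its mean."""
    filled = [m if v is None else v for v, m in zip(x, means)]
    return sum(w * v for w, v in zip(weights, filled)) + bias

# Second feature is missing; its distributional mean is 4.0.
r = expected_response([1.0, None], weights=[2.0, 3.0], bias=0.5,
                      means=[0.0, 4.0])
```

The appeal of the approach is that only this first-layer computation changes, so the rest of the network, and training on incomplete data, proceed unmodified.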

Pseudo-Feature Generation for Imbalanced Data Analysis in Deep Learning. (arXiv:1807.06538v3 [cs.LG] UPDATED)

Authors: Tomohiko Konno, Michiaki Iwazume

We generate pseudo-features from multivariate probability distributions obtained from feature maps in a low layer of trained deep neural networks. Then, we virtually augment the data of minor classes with these pseudo-features in order to overcome imbalanced data problems. Because data in the wild are imbalanced, the proposed method has the potential to improve the ability of DNNs on a broad range of problems.
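The fit-and-sample step can be sketched with a diagonal Gaussian (a simplification: the paper fits a full multivariate distribution to low-layer feature maps, and the feature vectors below are made up):

```python
import random

def fit_diag_gaussian(features):
    """Per-dimension mean and standard deviation of minority-class features."""
    dims = list(zip(*features))
    means = [sum(d) / len(d) for d in dims]
    stds = [(sum((v - m) ** 2 for v in d) / len(d)) ** 0.5
            for d, m in zip(dims, means)]
    return means, stds

def sample_pseudo_features(means, stds, n):
    """Draw pseudo-features to virtually augment the minority class."""
    return [[random.gauss(m, s) for m, s in zip(means, stds)]
            for _ in range(n)]

random.seed(1)
minority = [[1.0, 2.0], [3.0, 2.0], [2.0, 2.0]]  # hypothetical feature vectors
means, stds = fit_diag_gaussian(minority)
pseudo = sample_pseudo_features(means, stds, n=5)
```

The sampled pseudo-features rebalance the class distribution seen by the upper layers without collecting any new data.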

Comment on TensorFlow JS Tutorial – Build a neural network with TensorFlow for Beginners by Infoundation Organisation
Any help in Text Classification in tensorflow.js
Neural network predictions know when companies are being mentioned
Comment on First Wave of Spiking Neural Network Hardware Hits by Richard French
For me at least, the important thing here is the advancement of special-purpose (and event-driven, hence power efficient) neuromorphic hardware. Sure, we have powerful GPU-inspired architectures and they certainly have their place in high-performance general-purpose computing, but now we're seeing potential products being developed inspired by real nervous systems (and likely researched using simulations on GPU-based high-performance system) and will hopefully serve as excellent widely available experimental platforms. This in my mind will accelerate the research and development of a low-power requirement (and hence mobile) intelligent ability for machines. Don't forget that we're still discovering much about single neuron function, let alone the entire anterior nervous system itself, and all of this will inform the function of our hardware implementation. Fascinating times ahead!
NVIDIA Shares General Performance Graphs for Turing GPUs

Next week marks the launch of NVIDIA's Turing-based GeForce RTX 20 series graphics cards, but while it is only a week away, there is still a lot of speculation about how these cards will perform. NVIDIA was keen to show off the architecture's ray tracing performance when they were announced, but there was little information about performance for traditional rendering methods, or the information was clouded by the use of DLSS, Deep Learning Super Sampling. From what I have seen, DLSS works by using a pre-trained neural network to better upscale images for a game, so it can actually be rendered at a lower resolution, significantly improving performance. Anyway, now NVIDIA has shown off another graph with some generalized performance information on it at GTC Japan 2018.

According to one graph, the RTX 2080 is capable of 4K at 60 FPS, without DLSS, and is faster than the GTX 1080 Ti. With DLSS, the performance is even greater. Just how much greater though is unknown because, like many marketing graphs, there is no y-scale, or really even much of a y-axis (the only mark is for 4K 60 FPS). While WCCFtech does have the graphs, there is no information on what games or benchmarks were used to arrive at these performance values, or what settings were used.

According to VideoCardz last week, the embargo on RTX 2080 and RTX 2080 Ti reviews ends on September 19, so we should learn more then.

Source: WCCFtech

Learning to Segment 3D Linear Structures Using Only 2D Annotations
We propose a loss function for training a Deep Neural Network (DNN) to segment volumetric data that accommodates ground truth annotations of 2D projections of the training volumes, instead of annotations of the 3D volumes themselves. In consequence, we significantly decrease the amount of annotation needed for a given training set. We apply the proposed loss to train DNNs for segmentation of vascular and neural networks in microscopy images and demonstrate only a marginal accuracy loss associated with the significant reduction of the annotation effort. The lower labor cost of deploying DNNs, brought about by our method, can contribute to a wide adoption of these techniques for the analysis of 3D images of linear structures.
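The core idea, comparing a projection of the predicted volume against a 2D annotation, can be sketched as follows (a toy stand-in for the paper's loss: a max projection along one axis and a per-pixel squared error, on made-up binary data):

```python
def project_z(volume):
    """Max-intensity projection of a z-y-x binary volume along z: the 2-D
    view an annotator labels instead of the full volume."""
    depth = len(volume)
    return [[max(volume[z][y][x] for z in range(depth))
             for x in range(len(volume[0][0]))]
            for y in range(len(volume[0]))]

def projection_loss(pred_volume, annotation_2d):
    """Per-pixel squared error between the projected prediction and the 2-D
    ground-truth annotation."""
    proj = project_z(pred_volume)
    return sum((p - a) ** 2
               for prow, arow in zip(proj, annotation_2d)
               for p, a in zip(prow, arow))

# A 2x2x2 toy prediction whose z-projection matches the 2-D annotation.
vol = [[[0, 1], [0, 0]],
       [[0, 0], [1, 0]]]
ann = [[0, 1], [1, 0]]
loss = projection_loss(vol, ann)
```

Because the loss is defined on projections, the annotator never has to trace structures slice by slice through the 3D stack, which is where the labor saving comes from.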
