
Diagnosing Heart Diseases with Deep Neural Networks

The Second National Data Science Bowl, a data science competition where the goal was to automatically determine cardiac volumes from MRI scans, has just ended. We participated with a team of 4 members from Ghent University and finished 2nd!

The team kunsthart ("artificial heart" in English) consisted of three PhD students (Ira Korshunova, Jeroen Burms, and Jonas Degrave) and professor Joni Dambre. It is a follow-up to last year's team ≋ Deep Sea ≋, which finished in first place in the First National Data Science Bowl.


This blog post is going to be long; here is a clickable overview of the different sections.


The problem

The goal of this year's Data Science Bowl was to estimate the minimum (end-systolic) and maximum (end-diastolic) volumes of the left ventricle from a set of MRI images taken over one heartbeat. These volumes are used by practitioners to compute an ejection fraction: the fraction of blood pumped out of the heart with each heartbeat. This measurement can predict a wide range of cardiac problems. For a skilled cardiologist, analysis of the MRI scans can take up to 20 minutes; making this process automatic is therefore obviously useful.
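To make the quantity concrete, the ejection fraction is a one-line computation once the two volumes are known. A minimal sketch (the function name and the example numbers are ours, purely for illustration):

```python
def ejection_fraction(end_diastolic_ml, end_systolic_ml):
    """Fraction of the left-ventricular blood volume ejected per heartbeat."""
    return (end_diastolic_ml - end_systolic_ml) / end_diastolic_ml

# e.g. a diastolic volume of 160 mL and a systolic volume of 60 mL
# give an ejection fraction of 0.625 (62.5%).
print(ejection_fraction(160, 60))  # 0.625
```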

Unlike the previous Data Science Bowl, which had a very clean and voluminous dataset, this year's competition required much more focus on dealing with inconsistencies in the way the very limited number of data points were gathered. As a result, most of our effort went into trying out different ways to preprocess and combine the different data sources.

The data

The dataset consisted of over a thousand patients. For each patient, we were given a number of 30-frame MRI videos in the DICOM format, showing the heart during a single cardiac cycle (i.e. a single heartbeat). These videos were taken in different planes, including multiple short-axis views (SAX), a 2-chamber view (2Ch), and a 4-chamber view (4Ch). The SAX views, whose planes are perpendicular to the long axis of the left ventricle, form a series of slices that (ideally) cover the entire heart. The number of SAX slices ranged from 1 to 23. Typically, the region of interest (ROI) is only a small part of the entire image. Below you can find a few SAX slices and the 2Ch and 4Ch views from one of the patients. Red circles on the SAX images indicate the ROI's center (later we will explain how to find it); for the 2Ch and 4Ch views they specify the locations of the SAX slices projected onto the corresponding view.

[Figure: SAX slices sax_5, sax_9, sax_10, sax_11, sax_12, sax_15, and the 2Ch and 4Ch views]

The DICOM files also contained a fair amount of metadata. Some of the metadata fields, like PixelSpacing and ImageOrientation, were absolutely invaluable to us. The metadata also specified the patient's age and sex.

For each patient in the train set, two labels were provided: the systolic volume and the diastolic volume. From what we gathered (link), these were obtained by cardiologists by manually performing a segmentation on the SAX slices, and feeding these segmentations to a program that computes the minimal and maximal heart chamber volumes. The cardiologists didn’t use the 2Ch or 4Ch images to estimate the volumes, but for us they proved to be very useful.

Combining these multiple data sources can be difficult, but for us dealing with inconsistencies in the data was even more challenging. Some examples: the 4Ch slice not being provided for some patients, one patient with fewer than 30 frames per MRI video, a couple of patients with only a handful of SAX slices, and patients with SAX slices taken in odd locations and orientations.

The evaluation

Given a patient’s data, we were asked to output a cumulative distribution function over the volume, ranging from 0 to 599 mL, for both systole and diastole. The models were scored by a Continuous Ranked Probability Score (CRPS) error metric, which computes the average squared distance between the predicted CDF and a Heaviside step function representing the real volume.
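The metric can be sketched in a few lines of NumPy. This is our illustrative reconstruction of the scoring rule as described above, not competition code:

```python
import numpy as np

def crps(predicted_cdf, true_volume_ml):
    """Continuous Ranked Probability Score for one case: the average squared
    distance between the predicted CDF over 0..599 mL and a Heaviside step
    function placed at the true volume."""
    volumes = np.arange(600)
    heaviside = (volumes >= true_volume_ml).astype(float)
    return np.mean((predicted_cdf - heaviside) ** 2)

# A perfect prediction is itself a step function and scores 0.
perfect = (np.arange(600) >= 120).astype(float)
print(crps(perfect, 120))  # 0.0
```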

An additional interesting novelty of this competition was its two-stage process. In the first stage, we were given a training set of 500 patients and a public test set of 200 patients. In the final week, we were required to submit our model; afterwards, the organizers released the test data of 440 patients and the labels for the 200 patients from the public test set. We think the goal was to compensate for the small dataset and to prevent people from optimizing against the test set through visual inspection of every part of their algorithm. In the first stage, hand-labeling was allowed on the training dataset only; in the second stage it was also allowed for the 200 validation patients.

The solution: traditional image processing, convnets, and dealing with outliers

In our solution, we combined traditional image processing approaches, which find the region of interest (ROI) in each slice, with convolutional neural networks, which perform the mapping from the extracted image patches to the predicted volumes. Given the very limited number of training samples, we tried to combat overfitting by restricting our models to combine the different data sources in predefined ways, as opposed to having them learn how to do the aggregation. Unlike many other contestants, we performed no hand-labelling.

Pre-processing and data augmentation

The provided images have varying sizes and resolutions, and do not only show the heart, but the entire torso of the patient. Our preprocessing pipeline made the images ready to be fed to a convolutional network by going through the following steps:

  • applying a zoom factor such that all images have the same resolution in millimeters
  • finding the region of interest and extracting a patch centered around it
  • data augmentation
  • contrast normalization

To find the correct zoom factor, we made use of the PixelSpacing metadata field, which specifies the image resolution in millimeters per pixel. Below, we explain our approach to ROI detection and data augmentation.
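For illustration, the resolution-normalization step can be sketched as follows. This is a simplified nearest-neighbour resample, only a stand-in for whatever interpolation a real pipeline would use, and the function name and target resolution are our own:

```python
import numpy as np

def normalize_resolution(image, pixel_spacing_mm, target_mm_per_pixel=1.0):
    """Nearest-neighbour resample so that one pixel spans a fixed physical
    size, based on the DICOM PixelSpacing values (mm per pixel along the
    row and column axes)."""
    rows, cols = image.shape
    zoom_r = pixel_spacing_mm[0] / target_mm_per_pixel
    zoom_c = pixel_spacing_mm[1] / target_mm_per_pixel
    new_rows = int(round(rows * zoom_r))
    new_cols = int(round(cols * zoom_c))
    # Map each output pixel back to its nearest source pixel.
    r_idx = np.minimum((np.arange(new_rows) / zoom_r).astype(int), rows - 1)
    c_idx = np.minimum((np.arange(new_cols) / zoom_c).astype(int), cols - 1)
    return image[np.ix_(r_idx, c_idx)]

img = np.zeros((100, 100))
resampled = normalize_resolution(img, pixel_spacing_mm=(1.5, 1.5))
print(resampled.shape)  # (150, 150)
```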

Detecting the Region Of Interest through image segmentation techniques

We used classical computer vision techniques to find the left ventricle in the SAX slices. For each patient, the center and width of the ROI were determined by combining the information of all the SAX slices provided. The figure below shows an example of the result.

ROI extraction steps

First, as was suggested in the Fourier-based tutorial, we exploit the fact that each slice sequence captures one heartbeat, and use Fourier analysis to extract an image that captures the maximal activity at the corresponding heartbeat frequency (same figure, second image).

From these Fourier images, we then extracted the center of the left ventricle by combining the Hough circle transform with a custom kernel-based majority voting approach across all SAX slices. First, for each Fourier image (resulting from a single SAX slice), the highest scoring Hough circles for a range of radii were found, and from all of those, the highest scoring ones were retained. The number of retained circles and the range of radii are metaparameters that severely affect the robustness of the detected ROI, and they were optimised manually. The third image in the figure shows an example of the best circles for one slice.

Finally, a ‘likelihood surface’ (rightmost image in figure above) was obtained by combining the centers and scores of the selected circles for all slices. Each circle center was used as the center for a Gaussian kernel, which was scaled with the circle score, and all these kernels were added. The maximum across this surface was selected as the center of the ROI. The width and height of the bounding box of all circles with centers within a maximal distance (another hyperparameter) of the ROI center were used as bounds for the ROI or to create an ellipsoidal mask as shown in the figure.
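The kernel-based voting step can be sketched as follows. The circle-tuple format and the kernel width are hypothetical; this illustrates the idea of the likelihood surface, not our actual code:

```python
import numpy as np

def roi_center(circles, image_shape, sigma=10.0):
    """Build a 'likelihood surface' from Hough-circle detections across all
    SAX slices: one Gaussian kernel per circle center, scaled by the circle's
    Hough score, summed over all circles. The argmax is the ROI center.
    `circles` is a list of (row, col, score) tuples (hypothetical format)."""
    rows, cols = np.mgrid[0:image_shape[0], 0:image_shape[1]]
    surface = np.zeros(image_shape)
    for r, c, score in circles:
        surface += score * np.exp(-((rows - r) ** 2 + (cols - c) ** 2)
                                  / (2 * sigma ** 2))
    return np.unravel_index(np.argmax(surface), image_shape)

# Three noisy detections clustered near (40, 50) and one low-scoring outlier:
detections = [(40, 50, 1.0), (42, 48, 0.9), (39, 52, 0.8), (90, 10, 0.5)]
print(roi_center(detections, (128, 128)))  # lands near the cluster around (40, 50)
```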

Given these ROIs in the SAX slices, we were able to find the ROIs in the 2Ch and 4Ch slices by projecting the SAX ROI centers onto the 2Ch and 4Ch planes.

Data augmentation

As always when using convnets on a problem with few training examples, we used tons of data augmentation. Some special precautions were needed, since we had to preserve the surface area. In terms of affine transformations, this means that only skewing, rotation and translation were allowed. We also added zooming, but we had to correct our volume labels when doing so! This helped to make the distribution of labels more diverse.

Another augmentation here came in the form of shifting the images over the time axis. While systole was often found in the beginning of a sequence, this was not always the case. Augmenting this, by rolling the image tensor over the time axis, made the resulting model more robust against this noise in the dataset, while providing even more augmentation of our data.

Data augmentation was applied during the training phase to increase the number of training examples. We also applied the augmentations during the testing phase, and averaged predictions across the augmented versions of the same data sample.
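The time-axis rolling and the test-time averaging can be sketched as follows (illustrative only; `model` stands for any per-video predictor, and the shift values are examples):

```python
import numpy as np

def time_roll_augmentations(video, shifts=(0, 5, 10, 15)):
    """Roll a (time, height, width) MRI tensor over the time axis, so a model
    trained on the rolled copies becomes robust to where in the sequence
    systole occurs."""
    return [np.roll(video, shift, axis=0) for shift in shifts]

def tta_predict(model, video, shifts=(0, 5, 10, 15)):
    """Test-time augmentation: average the model's predictions over the
    rolled versions of the same video."""
    preds = [model(v) for v in time_roll_augmentations(video, shifts)]
    return np.mean(preds, axis=0)
```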

Network architectures

We used convolutional neural networks to learn a mapping from the extracted image patches to systolic and diastolic volumes. During the competition, we played around a lot with both minor and major architectural changes. Our base architecture for most of our models was based on VGG-16.

As we already mentioned, we trained different models which can deal with different kinds of patients. There are roughly four different kinds of models we trained: single slice models, patient models, 2Ch models and 4Ch models.

Single slice models

Single slice models are models that take a single SAX slice as an input, and try to predict the systolic and diastolic volumes directly from it. The 30 frames were fed to the network as 30 different input channels. The systolic and diastolic networks shared the convolutional layers, but the dense layers were separated. The output of the network could be either a 600-way softmax (followed by a cumulative sum), or the mean and standard deviation of a Gaussian (followed by a layer computing the cdf of the Gaussian).
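The first output variant can be sketched in a few lines (illustrative NumPy, not the actual Theano/Lasagne layer): a 600-way softmax assigns a probability mass to each mL value, and a cumulative sum turns that into a valid, monotone CDF.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def volume_cdf_from_logits(logits):
    """Turn a 600-way network output into a CDF over 0..599 mL: softmax
    gives a probability mass per mL bin, cumsum gives the CDF."""
    return np.cumsum(softmax(logits))

cdf = volume_cdf_from_logits(np.random.randn(600))
# By construction the CDF is non-decreasing and ends at 1.
```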

Although these models obviously have too little information to make a decent volume estimation, they benefitted hugely from test-time augmentation (TTA). During TTA, the model gets slices with different augmentations, and the outputs are averaged across augmentations and slices for each patient. Although this way of aggregating over SAX slices is suboptimal, it proved to be very robust to the relative positioning of the SAX slices, and is as such applicable to all patients.

Our best single slice model achieved a local validation score of 0.0157 (after TTA), which was a reliable estimate of the public leaderboard score for these models. The approximate architecture of the slice models is shown in the following figure.

2Ch and 4Ch models

These models have a much more global view on the left ventricle of the heart than single SAX slice models. The 2Ch models also have the advantage of being applicable to every patient; not every patient had a 4Ch slice. We used the same VGG-inspired architecture for these models. Individually, they achieved a similar validation score (0.0156) to the one achieved by averaging over multiple SAX slices. By ensembling only single slice, 2Ch and 4Ch models, we were able to achieve a score of 0.0131 on the public leaderboard.

Patient models

As opposed to single slice models, patient models try to make predictions based on the entire stack of (up to 25) SAX slices. In our first approaches to these models, we tried to process each slice separately using a VGG-like single slice network, followed by feeding the results to an overarching RNN in an ordered fashion. However, these models tended to overfit badly. Our solution to this problem consists of a clever way to merge predictions from multiple slices. Instead of having the network learn how to compute the volume based on the results of the individual slices, we designed a layer which combines the areas of consecutive cross-sections of the heart using a truncated cone approximation.

Basically, the slice models have to estimate the area $A_i$ (and the standard deviation thereof) of the cross-section of the heart in a given slice $i$. For each pair of consecutive slices $i$ and $i+1$, we estimate the volume of the heart between them as $V_i = \frac{d_i}{3}\left(A_i + \sqrt{A_i A_{i+1}} + A_{i+1}\right)$, where $d_i$ is the distance between the slices. The total volume is then given by $V = \sum_i V_i$.

Ordering the SAX slices and finding the distances between them could in principle be done by looking at the SliceLocation metadata field, but this field was not very reliable for finding the distance between slices, and neither was SliceThickness. Instead, we looked for the two slices that were furthest apart, drew a line between them, and projected every other slice onto this line. This way, we estimated the distances between slices ourselves.
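Both steps, ordering the slices by projection and summing truncated cones, can be sketched as follows. This is illustrative code; the 3D slice positions would come from DICOM metadata such as ImagePositionPatient, which we treat here as given:

```python
import numpy as np

def slice_positions(centers_mm):
    """Order SAX slices by projecting each slice's 3D position onto the
    line through the two slices that are furthest apart. Returns the
    signed distance of each slice along that line."""
    centers = np.asarray(centers_mm, dtype=float)
    dists = np.linalg.norm(centers[:, None] - centers[None, :], axis=2)
    i, j = np.unravel_index(np.argmax(dists), dists.shape)
    axis = (centers[j] - centers[i]) / dists[i, j]
    return (centers - centers[i]) @ axis

def truncated_cone_volume(areas_mm2, positions_mm):
    """Total volume from per-slice cross-section areas, summing a truncated
    cone between each pair of consecutive slices:
    V_i = d_i / 3 * (A_i + sqrt(A_i * A_{i+1}) + A_{i+1})."""
    order = np.argsort(positions_mm)
    a = np.asarray(areas_mm2, dtype=float)[order]
    p = np.asarray(positions_mm, dtype=float)[order]
    d = np.diff(p)
    return np.sum(d / 3.0 * (a[:-1] + np.sqrt(a[:-1] * a[1:]) + a[1:]))
```

As a sanity check, equal areas reduce the formula to a cylinder: three slices of 100 mm² spaced 10 mm apart give a volume of about 2000 mm³ (area 100 mm² times length 20 mm).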

Our best single model achieved a local validation score of 0.0105 using this approach. This was no longer a good leaderboard estimation, since our local validation set contained relatively few outliers compared to the public leaderboard in the first round. The model had the following architecture:

Layer Type Size Output shape
Input layer   (8, 25, 30, 64, 64)*
Convolution 128 filters of 3x3 (8, 25, 128, 64, 64)
Convolution 128 filters of 3x3 (8, 25, 128, 64, 64)
Max pooling   (8, 25, 128, 32, 32)
Convolution 128 filters of 3x3 (8, 25, 128, 32, 32)
Convolution 128 filters of 3x3 (8, 25, 128, 32, 32)
Max pooling   (8, 25, 128, 16, 16)
Convolution 256 filters of 3x3 (8, 25, 256, 16, 16)
Convolution 256 filters of 3x3 (8, 25, 256, 16, 16)
Convolution 256 filters of 3x3 (8, 25, 256, 16, 16)
Max pooling   (8, 25, 256, 8, 8)
Convolution 512 filters of 3x3 (8, 25, 512, 8, 8)
Convolution 512 filters of 3x3 (8, 25, 512, 8, 8)
Convolution 512 filters of 3x3 (8, 25, 512, 8, 8)
Max pooling   (8, 25, 512, 4, 4)
Convolution 512 filters of 3x3 (8, 25, 512, 4, 4)
Convolution 512 filters of 3x3 (8, 25, 512, 4, 4)
Convolution 512 filters of 3x3 (8, 25, 512, 4, 4)
Max pooling   (8, 25, 512, 2, 2)
Fully connected (S/D) 1024 units (8, 25, 1024)
Fully connected (S/D) 1024 units (8, 25, 1024)
Fully connected (S/D) 2 units (mu and sigma) (8, 25, 2)
Volume estimation (S/D)   (8, 2)
Gaussian CDF (S/D)   (8, 600)

* The first dimension is the batch size, i.e. the number of patients, the second dimension is the number of slices. If a patient had fewer slices, we padded the input and omitted the extra slices in the volume estimation.

Oftentimes, we did not train patient models from scratch. We found that initializing patient models with single slice models helps against overfitting, and severely reduces training time of the patient model.

The architecture we described above was one of the best for us. To diversify our models, some of the good things we tried include:

  • processing each frame separately, and taking the minimum and maximum at some point in the network to compute systole and diastole
  • sharing some of the dense layers between the systole and diastole networks as well
  • using discs to approximate the volume, instead of truncated cones
  • cyclic rolling layers
  • leaky ReLUs
  • maxout units

One downside of the patient model approach was that these models assume that the SAX slices nicely range from one end of the heart to the other. This was trivially untrue for patients with very few (< 5) slices, but it was harder to detect automatically for some other outlier cases, as in the figure below, where something is wrong with the images or the ROI algorithm fails.

[Figure: an outlier patient; SAX slices sax_12, sax_15, sax_17, sax_36, sax_37, sax_41, and the 2Ch and 4Ch views]

Training and ensembling

Error function. At the start of the competition, we experimented with various error functions, but we found optimising CRPS directly to work best.

Training algorithm. To train the parameters of our models, we used the Adam update rule (Kingma and Ba).

Initialization. We initialised all filters and dense layers orthogonally (Saxe et al.). Biases were initialized to small positive values to have more gradients at the lower layer in the beginning of the optimization. At the Gaussian output layers, we initialized the biases for mu and sigma such that initial predictions of the untrained network would fall in a sensible range.

Regularization. Since we had a low number of patients, we needed considerable regularization to prevent our models from overfitting. Our main approach was to augment the data and to add a considerable amount of dropout.


Since the train set was already quite small, we kept the validation set small as well (83 patients). Despite this, our validation score remained pretty close to the leaderboard score. And in the cases where it didn't, the discrepancy helped us identify issues in our models, namely problematic cases in the test set that were not represented in our validation set. We noticed, for instance, that quite a few of our patient models had problems with patients with too few (< 5) SAX slices.

Selectively train and predict

By looking more closely at the validation scores, we observed that most of the accumulated error was obtained by wrongly predicting only a couple of such outlier cases. At some point, being able to handle only a handful of these meant the difference between a leaderboard score of 0.0148 and 0.0132!

To mitigate such issues, we set up our framework such that each individual model could choose not to train on or predict a certain patient. For instance, models on patients’ SAX slices could choose not to predict patients with too few SAX slices, models which use the 4Ch slice would not predict for patients who don’t have this slice. We extended this idea further by developing expert models, which only trained and predicted for patients with either a small or a big heart (as determined by the ROI detection step). Further down the pipeline, our ensembling scripts would then take these non-predictions into account.

Ensembling and dealing with outliers

We ended up creating about 250 models throughout the competition. However, we knew that some of these models were not very robust to certain outliers, or to patients whose ROI we could not accurately detect. We came up with two different ensembling strategies to deal with these kinds of issues.

Our first ensembling technique followed the following steps:

  1. For each patient, we select the best way to average over the test time augmentations. Slice models often preferred a geometric averaging of distributions, whereas in general arithmetic averaging worked better for patient models.
  2. We average over the models by calculating each prediction’s KL-divergence from the average distribution, and the cross entropy of each single sample of the distribution. This means that models which are further away from the average distribution get more weight (since they are more certain). It also means samples of the distribution closer to the median-value of 0.5 get more weight. Each model also receives a model-specific weight, which is determined by optimizing these weights over the validation set.
  3. Since not all models predict all patients, it is possible for a model in the ensemble to not predict a certain patient. In this case, a new ensemble without these models is optimized, especially for this single patient. The method to do this is described in step 2.
  4. This ensemble is then used on every patient in the test set. However, when a certain model's average prediction disagrees too much with the average prediction of all models, the model is thrown out of the ensemble, and a new ensemble is optimized for this patient, as described in step 2. This meant that about 75% of all patients received a new, 'personalized' ensemble.

Our second way of ensembling involves comparing an ensemble that is suboptimal but robust to outliers with an ensemble that is not robust to them. This approach is especially interesting, since it does not need a validation set to predict the test patients. It follows these steps:

  1. Again, for each patient, we select the best way to average over the test time augmentations.
  2. We combine the models by using a weighted average on the predictions, with the weights summing to one. These weights are determined by optimising them on the validation set. In case not all models provide a prediction for a certain patient, it is dropped for that patient and the weights of the other models are rescaled such that they again sum to one. This ensemble is not robust to outliers, since it contains patient models.
  3. We combine all 2Ch, 4Ch and slice models in a similar fashion. This ensemble is robust to outliers, but only contains less accurate models.
  4. We detect outliers by finding the patients where the two ensembles disagree the most. We measure disagreement using CRPS. If the CRPS exceeds a certain threshold for a patient, we assume it to be an outlier. We chose this threshold to be 0.02.
  5. We retrain the weights for the first ensemble, but omit the outliers from the validation set. We choose this ensemble to generate predictions for most of the patients, but choose the robust ensemble for the outliers.

Following this approach, we detected three outliers in the test set during phase one of the competition. Closer inspection revealed that for all of them, either our ROI detection had failed or the SAX slices were not nicely distributed across the heart. Both ways of ensembling achieved similar scores (0.0110) on the public leaderboard.

Second round submissions

For the second round of the competition, we were allowed to retrain our models on the new labels (200 additional patients). We were also allowed two submissions. Of course, it was impossible to retrain all of our models during this single week, so we chose to retrain only our 44 best models, as selected by our ensembling scripts.

For our first submission, we split off a new validation set. The resulting models were combined using our first ensembling strategy.

For our second submission, we trained our models on the entire training set (i.e. there was no validation split). We ensembled them using the second ensembling method. Since we had no validation set to optimise the weights of the ensemble, we computed the weights by training an ensemble on the models trained with a validation split, and transferred them over.

Software and hardware

We used Lasagne, Python, Numpy and Theano to implement our solution, in combination with the cuDNN library. We also used PyCUDA for a few custom kernels. We made use of scikit-image for pre-processing and augmentation.

We trained our models on the NVIDIA GPUs that we have in the lab, which include GTX TITAN X, GTX 980, GTX 680 and Tesla K40 cards. We would like to thank Frederick Godin and Elias Vansteenkiste for lending us a few extra GPUs in the last week of the competition.


In this competition, we tried out different ways to preprocess data and combine information from different data sources, and we learned a lot in this respect. However, we feel that there is still room for improvement. For example, we observed that most of our error still stems from a select group of patients, including the ones for which our ROI extraction fails. In hindsight, hand-labeling the training data and training a network to do the ROI extraction would have been a better approach, but we wanted to sidestep this kind of manual effort as much as possible. In the end, labeling the data would probably have been less time-intensive.

UPDATE (March 23): the code is now available on GitHub:

Support for a LoRaWAN Subsystem

Sometimes kernel developers find themselves competing with each other to get their version of a particular feature into the kernel. But sometimes developers discover they've been working along very similar lines, and the only reason they hadn't been working together was that they just didn't know each other existed.

Recently, Jian-Hong Pan asked if there was any interest in a LoRaWAN subsystem he'd been working on. LoRaWAN is a commercial networking protocol implementing a low-power wide-area network (LPWAN) allowing relatively slow communications between things, generally phone sensors and other internet of things devices. Jian-Hong posted a link to the work he'd done so far:

He specifically wanted to know "should we add the definitions into corresponding kernel header files now, if LoRaWAN will be accepted as a subsystem in Linux?" The reason he was asking was that each definition had its own number. Adding them into the kernel would mean the numbers associated with any future LoRaWAN subsystem would stay the same during development.

However, Marcel Holtmann explained the process:

When you submit your LoRaWAN subsystem to netdev for review, include a patch that adds these new address family definitions. Just pick the next one available. There will be no pre-allocation of numbers until your work has been accepted upstream. Meaning, that the number might change if other address families get merged before yours. So you have to keep updating. glibc will eventually follow the number assigned by the kernel.

Meanwhile, Andreas Färber said he'd been working on supporting the same protocol himself and gave a link to his own proof-of-concept repository:

On learning about Andreas' work, Jian-Hong's response was, "Wow! Great! I get new friends :)"

That's where the public conversation ended. The two of them undoubtedly have pooled their energies and will produce a new patch, better than either of them might have done separately.

Signatures for finite-dimensional representations of real reductive Lie groups. (arXiv:1809.03533v1 [math.RT])

Authors: Daniil Kalinov, David A. Vogan, Jr., Christopher Xu

We present a closed formula, analogous to the Weyl dimension formula, for the signature of an invariant Hermitian form on any finite-dimensional irreducible representation of a real reductive Lie group, assuming that such a form exists. The formula shows in a precise sense that the form must be very indefinite. For example, if an irreducible representation of $GL(n,R)$ admits an invariant form of signature $(p,q)$, then we show that $(p-q)^2 \le p+q$. The proof is an application of Kostant's computation of the kernel of the Dirac operator.

The reproducing kernel Hilbert space approach in nonparametric regression problems with correlated observations. (arXiv:1809.03754v1 [math.ST])

Authors: Djihad Benelmadani, Karim Benhenni, Sana Louhichi

In this paper we investigate the problem of estimating the regression function in models with correlated observations. The data is obtained from several experimental units, each of which forms a time series. We propose a new estimator based on the inverse of the autocovariance matrix of the observations, assumed known and invertible. Using the properties of Reproducing Kernel Hilbert Spaces, we give the asymptotic expressions of its bias and its variance. In addition, we give a theoretical comparison, by calculating the IMSE, between this new estimator and the classical one proposed by Gasser and Müller. Finally, we conduct a simulation study to investigate the performance of the proposed estimator and to compare it to Gasser and Müller's estimator in a finite sample setting.

On the approximation of L\'evy driven Volterra processes and their integrals. (arXiv:1809.04011v1 [math.OC])

Authors: Giulia di Nunno, Andrea Fiacco, Erik Hove Karlsen

Volterra processes appear in several applications ranging from turbulence to energy finance where they are used in the modelling of e.g. temperatures and wind and the related financial derivatives. Volterra processes are in general non-semimartingales and a theory of integration with respect to such processes is in fact not standard. In this work we suggest to construct an approximating sequence of L\'evy driven Volterra processes, by perturbation of the kernel function. In this way, one can obtain an approximating sequence of semimartingales.

Then we consider fractional integration with respect to Volterra processes as integrators and we study the corresponding approximations of the fractional integrals. We illustrate the approach presenting the specific study of the Gamma-Volterra processes. Examples and illustrations via simulation are given.

Kac-Ward formula and its extension to order-disorder correlators through a graph zeta function. (arXiv:1709.06052v3 [math-ph] UPDATED)

Authors: Michael Aizenman, Simone Warzel

A streamlined derivation of the Kac-Ward formula for the planar Ising model's partition function is presented and applied in relating the kernel of the Kac-Ward matrices' inverse to the Ising model's order-disorder correlation functions. A shortcut for both is facilitated by the Bowen-Lanford graph zeta function relation. The Kac-Ward relation is also extended here to produce a family of non-planar interactions on $\mathbb{Z}^2$ for which the partition function and the correlation functions based on the order-disorder correlators are solvable at special values of the coupling parameters / temperature.

Small-time asymptotics for subelliptic Hermite functions on $SU(2)$ and the CR sphere. (arXiv:1710.02550v2 [math.AP] UPDATED)

Authors: Joshua Campbell, Tai Melcher

We show that, under a natural scaling, the small-time behavior of the logarithmic derivatives of the subelliptic heat kernel on $SU(2)$ converges to their analogues on the Heisenberg group at time 1. Realizing $SU(2)$ as $\mathbb{S}^3$, we then generalize these results to higher-order odd-dimensional spheres equipped with their natural subRiemannian structure, where the limiting spaces are now the higher-dimensional Heisenberg groups.

[Free] 2018(Aug) Ensurepass VMware 2V0-622 Dumps with VCE and PDF 61-70

Ensure you pass the IT Exams. 2018 Aug VMware Official New Released 2V0-622. 100% Free Download! 100% Pass Guaranteed! VMware Certified Professional 6.5 – Data Center Virtualization

Question No: 61
A vSphere Administrator attempts to enable Fault Tolerance for a virtual machine but receives the following error: "Secondary VM could not be powered on as there are no compatible hosts that can accommodate it." What two options could cause this error? (Choose two.)
A. The other ESXi host(s) are in Maintenance Mode.
B. Hardware virtualization is not enabled on the other ESXi host(s).
C. The other ESXi host(s) are in Quarantine Mode.
D. Hardware MMU is enabled on the other ESXi host(s).
Answer: A, B

Question No: 62
A security officer has issued a new directive that users will no longer have access to change connected network adapters, to limit denial of service on a virtual machine. Which two virtual machine advanced configuration parameters will accomplish this? (Choose two.)
A. isolation.device.edit.disable = "FALSE"
B. isolation.device.edit.disable = "TRUE"
C. isolation.device.connectable.disable = "FALSE"
D. isolation.device.connectable.disable = "TRUE"
Answer: B, D

Question No: 63
In which scenario will vSphere DRS balance resources when set to Fully Automated?
A. Hosts with only shared storage
B. Hosts are part of a vSphere HA cluster
C. Power Management is set to Balanced on hosts
D. Hosts with shared or non-shared storage
Answer: A
Explanation: The default automation level is Fully Automated, meaning DRS automatically migrates VMs across hosts whenever it deems it necessary; for this, it needs hosts with shared storage.

Question No: 64
By default, how often is the vCenter vpx user password automatically changed?
A. 15 days
B. 60 days
C. 30 days
D. 45 days
Answer: C

Question No: 65
A vSphere Administrator wants to reserve 0.5 Gbps for virtual machines on each uplink of a distributed switch that has 10 uplinks. What quota should be reserved for the network resource pool?
A. 10 Gbps
B. 5 Gbps
C. 100 Gbps
D. 0.5 Gbps
Answer: B
Explanation: If virtual machine system traffic has 0.5 Gbps reserved on each 10 GbE uplink of a distributed switch that has 10 uplinks, the total aggregated bandwidth available for VM reservation on this switch is 5 Gbps. Each network resource pool can reserve a quota of this 5 Gbps capacity.

Question No: 66
When upgrading a VMware vSAN cluster to version 6.5, which two tasks must be completed to comply with upgrade requirements and VMware-recommended best practices? (Choose two.)
A. Use RVC to upgrade the hosts.
B. Use the latest available ESXi version.
C. Back up all VMs.
D. Use only the "Full data migration" maintenance mode option.
Answer: A, B

Question No: 67
Which is required for configuring iSCSI Software Adapter network port binding?
A. VMkernel of the iSCSI traffic must be load balanced using the Route based on IP Hash algorithm.
B. VMkernel of the iSCSI traffic must be load balanced using the Route based on Source Virtual Port ID algorithm.
C. VMkernel of the iSCSI traffic must... Read More
          North America & Europe Palm Derivatives Market Research, Size, Share, Trend, Growth, Top Key Players and Forecast 2018 to 2028
(EMAILWIRE.COM, September 13, 2018) Market Overview: Derivatives and fractions of palm oil and palm kernel oil have widespread uses across various industries, such as food manufacturing, personal care and healthcare products, and industrial products. Furthermore, palm oil and some of its derivatives can be...
          The "Hidden" (隐匿者) virus gang upgrades its techniques to spread a new virus, brute-forcing its way into computers and threatening users across the internet
I. Overview

Recently, the Huorong (火绒) security team found that the virus gang "Hidden" (隐匿者) has made a new round of technical upgrades and is spreading a virus called "Voluminer". After breaking into a computer by brute-force password guessing, the virus uses the victim's machine to mine Monero and leaves a backdoor behind; through remote control, the gang can modify the malicious code at any time and download other, more threatening virus modules. The virus also uses kernel-level countermeasures to evade detection and removal by security software.

After brute-forcing the user's database service and compromising the machine, the virus tampers with the system's Master Boot Record (MBR). As soon as the computer is rebooted, the virus executes, runs malicious code in kernel space, and then injects that code into a system process (winlogon or explorer); finally, the injected code downloads a backdoor virus and runs it locally.

At present, the backdoor downloads and runs mining-related modules to mine Monero, but the possibility that the gang will push other modules in the future and launch more dangerous attacks cannot be ruled out.

The Huorong security team has previously exposed this virus-making group, "Hidden". Long-term tracking shows that it has remained active the whole time, may be made up of or involve Chinese members, and operates purely for profit. It is one of the most active hacker gangs on the internet in recent years, with the largest number of attacks and the widest attack range.

Compared with earlier campaigns, the virus samples "Hidden" is spreading this time use techniques that reach deeper into the system, are stealthier, and are harder for users to notice. The virus uses kernel-level methods to protect its own code on disk, fights back against security software, and is difficult to remove. It has also gained remote-control functionality and can download other virus modules at any time.

Related article: "Fully Exposing the Hacker Gang 'Hidden', Currently the Most Prolific Network Attack Group".

II. Origin of the virus

Through long-term tracking of the "Hidden" hacker group, we found that the Bootkit/Voluminer virus family, which has recently been spreading on a large scale, may be directly related to this group. After running, the virus tampers with the disk's MBR code; when the computer reboots and the virus MBR code executes, malicious code runs in kernel space and is then injected into winlogon or explorer (depending on the OS version); finally, the injected code downloads a backdoor virus and executes it locally. At this stage, the backdoor downloads and runs mining-related modules to mine Monero, but we cannot rule out the possibility that other modules will be pushed in the future.

"Hidden" usually brute-forces its way into users' computers through their RPC services, database servers, and so on, and then uses these footholds to execute further malicious code; the attack methods are exactly the same as those described in Huorong's July 2017 report "Fully Exposing the Hacker Gang 'Hidden', Currently the Most Prolific Network Attack Group". The attack behavior Huorong captured in connection with the current samples is shown in the figure "Attack behavior"; the behavior log used in our earlier report is shown in the figure "Original figure from the earlier report".

In that earlier report, the FTP username and password commonly used by the hackers were test and 1433, the same as those of the FTP server used in the attack captured this time. Among the FTP server addresses used by "Hidden", the server pointed to by the domain down.mysking.info is still reachable; although the virus files stored on it differ from those on the FTP server used in this attack, the file names are extremely similar (see figure: "FTP file comparison").

In addition, the language information of some of the virus samples captured this time is Simplified Chinese, the same as in the "Hidden" report. We can therefore preliminarily conclude that this attack may be directly related to the "Hidden" hacker group. The language information of the sample captured this time (SHA256: 46527e651ae934d84355adb0a868c5edda4fd1178c5201b078dbf21612e6bc78) is shown in the figure "Virus sample language information".

III. Sample analysis

Compared with the early "Hidden" samples, the samples currently spreading in the wild show increasingly complex behavior and use much lower-level attack techniques. For example, the sample described in this article infects the MBR and protects the tampered MBR code, which raises the difficulty of detecting and removing the virus.

Bootkit/Voluminer

After running, the virus writes its MBR code directly to disk; the original MBR data is backed up in the disk's second sector. The rest of the virus code starts at the third sector and occupies 54 sectors (excluding the MBR code). Because the kernel platform differs between x86 and x64, the analysis below uses the virus's infection of Windows 7 (x64) as the example (figures: "Infected MBR code data", "Virus MBR code").

When the virus MBR code runs, it copies the malicious code stored from the third sector onward to address 0x8f000 and executes it. That code hooks the INT 15 interrupt and then re-invokes the original MBR to carry out the normal boot logic. When INT 15 is called, the virus code searches for BootMgr code by matching hard-coded byte patterns and hooks it; the code executed after the hook eventually hooks Archx86TransferTo32BitApplicationAsm and Archx86TransferTo64BitApplicationAsm in Bootmgr.exe (figure: "Virus logic executed after hooking INT 15").

After BootMgr (the startup.com part) is hooked, the next hooking step is carried out when BootMgr.exe is loaded (figures: "Code that hooks BootMgr.exe", "Hooked function entry points of Archx86TransferTo32BitApplicationAsm and Archx86TransferTo64BitApplicationAsm"). Once those two functions are hooked, the malicious code they invoke hooks OslArchTransferToKernel when BootMgr.exe loads Winload.exe, in preparation for hooking ntoskrnl.exe (figures: "Code that hooks OslArchTransferToKernel", "Hooked OslArchTransferToKernel function code").

The malicious code executed after OslArchTransferToKernel is hooked then hooks ZwCreateSection and disables the PatchGuard-related logic in ntoskrnl.exe (figure: "Entry of the malicious code executed after hooking OslArchTransferToKernel"). It first obtains the address of ZwCreateSection via a function-name hash, then obtains the entry point of the malicious code that will ultimately run in kernel mode (malware_krnl_main_entry), then gathers the handler invoked after ZwCreateSection is hooked together with related information (the ntoskrnl base address, the malware_krnl_main_entry entry point, the ZwCreateSection entry address, and the original ZwCreateSection bytes that get patched out); finally, it patches the ZwCreateSection entry code and disables the PatchGuard-related code in ntoskrnl by modifying hard-coded bytes (figures: "Malicious code that hooks ZwCreateSection and breaks PatchGuard", "Hooked ZwCreateSection entry code").

The malicious code invoked after ZwCreateSection is hooked first restores the patched-out ZwCreateSection bytes, then maps the next-stage malicious code (at address 0x946E6) into kernel address space with MmMapIoSpace and executes it (figure: "Malicious code invoked after ZwCreateSection is hooked"). This runs the kernel-mode payload malware_krnl_main_entry, whose code first resolves the API addresses it needs via function-name hashes (figure: "malware_krnl_main_entry code").

Once the main kernel-mode malicious logic runs, it registers a thread-creation notification callback that checks whether the csrss.exe process has started; only after csrss.exe is running does it continue with the remaining malicious logic (figure: "Malicious logic in the thread notification callback"). As shown there, after detecting csrss.exe, it first tries to infect the MBR and protect the sectors holding the malicious code: by filtering IRPs, it returns clean boot-code data whenever a user reads the sectors containing the virus boot code, improving the virus's stealth (figure: "Infecting the MBR and copying the original MBR data to the second sector"). The protection logic for the malicious MBR and related data covers the first 0x3E sectors of the disk (figure: "Code protecting the malicious MBR and related data").

Afterwards, a kernel thread (malware_behav_entry) performs APC injection into winlogon.exe or explorer.exe depending on the operating system version: on Windows XP it injects explorer.exe, on other versions winlogon.exe, and on Windows 10 it additionally tries to hook the IRP callbacks of the Storport driver object (figure: "APC injection code").

The injected virus code drives its malicious logic mainly from a configuration file requested from the C&C server, which is dropped locally at %SystemRoot%\Temp\ntuser.dat. The file is encrypted by XOR with 0x95 and is decrypted whenever it is used (figure: "Decrypted ntuser.dat content").

As shown there, the configuration file has two parts, main and update: all IPs and URLs in the main part are used to download the backdoor-related configuration, while those in the update part are used to refresh the ntuser.dat configuration data. The configuration retrieved from these servers is still being updated today (figure: "Logic that downloads the backdoor configuration cloud.txt").

Besides the backdoor download address (the data under the exe key), the retrieved configuration also contains an item named url. When this feature is enabled, the code hooks CreateProcessW to hijack browser launch parameters, but at present the feature has not been switched on (figure: "Configuration data"). Using the download address in the configuration, the malicious code downloads the backdoor virus to %SystemRoot%\Temp\conhost.exe and executes it (figure: "Downloading and executing the backdoor").

Backdoor/Voluminer

After running, the backdoor first drops a file holding the C&C server list (xp.dat) into the C:\Program Files\Common Files directory, and then requests the file xpxmr.dat from the servers on that list in order to update the C&C server list. The xpxmr.dat data it receives is encrypted with the RSA algorithm; after decryption, it is written back to xpxmr.dat, which is stored in plaintext (figure: "Updating the C&C server list").

While running, the backdoor asks the C&C server for the latest virus version number; when it detects that a newer version exists, it downloads and runs the latest virus program from the C&C server. When the backdoor finds that the current system is 64-bit, it also requests a 64-bit version of the backdoor from the C&C server and executes it locally (figure: "Requesting the 64-bit version of the virus").

Next, the virus uses the C&C server addresses in the list to download the components needed for mining. For now, the components we have seen downloaded only have mining functionality, but we cannot rule out the possibility that other virus modules will be downloaded in the future. After downloading, the virus verifies each component's MD5 hash against the contents of the md5.txt file on the C&C server (figure: "Fetching remote malicious code modules").

Once the components are downloaded, the virus drops the mining-related modules and configuration files into the %windir%\debug directory and then starts its mining logic (figure: "Code that drops the mining configuration").

Compared with the behavior of this backdoor described in another vendor's report some time ago, its malicious behavior has clearly changed. In earlier versions, the backdoor delivered to users was registered as a system service (service name: Windows Audio Control), displayed a console window with a running log while executing, and finally downloaded and ran a mining virus, so the symptoms of infection were very obvious. In the current version, the backdoor delivered to users is considerably stealthier, and users are unlikely to notice anything while it runs. (Figure: "Snippet of the configuration used when mining Monero".) […]
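The ntuser.dat configuration described above is obfuscated with a single-byte XOR (key 0x95, per the analysis). A minimal sketch of how such a file could be decoded for inspection; the helper name and sample string are illustrative, only the key comes from the article:

```python
def xor_decrypt(data: bytes, key: int = 0x95) -> bytes:
    """Single-byte XOR is its own inverse: applying it twice restores the input."""
    return bytes(b ^ key for b in data)

# Round-trip demonstration on an illustrative config fragment:
encrypted = xor_decrypt(b"[main]")        # "encrypting" is the same operation
print(xor_decrypt(encrypted).decode())    # prints: [main]
```

In practice one would read the dropped file's bytes, XOR them with 0x95, and inspect the resulting plaintext sections.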
          LXer: Canonical Outs New Linux Kernel Live Patch for Ubuntu 18.04 LTS and 16.04 LTS
Published at LXer: Canonical released a new kernel live patch for all of its LTS (Long Term Support) Ubuntu Linux releases to address various security vulnerabilities discovered by various security...
          How to access RAM device file from the kernel module of some other driver?
Hi, I am writing a driver to communicate between a hardware accelerator and Linux running on a soft processor (Microblaze), both on FPGA. At some point, I want the kernel-space part of the driver...
          Porting Hyperkernel to the ARM architecture

          AMD Sends Out Initial Open-Source Linux Graphics Support For "Picasso" APUs
Adding to the exciting week for AMD open-source Linux graphics is that in addition to the long-awaited patch update for FreeSync/Adaptive-Sync/VRR, patches for the Linux kernel were sent out prepping the graphics upbringing for the unreleased "Picasso" APUs...
          Issues with the SFP+ ports on a D-1537 SOC (Supermicro X10SDV-7TP4F) with Linux

I'm going into an XS512EM and I've tried the AXM761-compatible DACs from 10Gtek, as well as a Cable Matters DAC with general support.  I'm on Fedora 28 with ixgbe 5.1.0-k, and what happens is that once the DAC is attached, it very quickly cycles through link down and link up.  I kind of suspect it's still a cabling issue, since SFP+ implementations are generally bad (in comparison to 10GBase-T), but I'm not sure.  I've also tried a recent Ubuntu and it had the same issues.


From journalctl -xe

May 08 18:55:02 myhost kernel: ixgbe 0000:04:00.1 eno8: detected SFP+: 4
May 08 18:55:02 myhost kernel: ixgbe 0000:04:00.1 eno8: NIC Link is Up 10 Gbps, Flow Control: RX/TX
May 08 18:55:02 myhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eno8: link becomes ready
May 08 18:55:02 myhost NetworkManager[1049]: <info>  [1525820102.6709] device (eno8): carrier: link connected
May 08 18:55:02 myhost NetworkManager[1049]: <info>  [1525820102.6733] device (eno8): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed')
May 08 18:55:02 myhost NetworkManager[1049]: <info>  [1525820102.6757] policy: auto-activating connection 'eno8'
May 08 18:55:02 myhost NetworkManager[1049]: <info>  [1525820102.6780] device (eno8): Activation: starting connection 'eno8' (adddd43a-d127-3c9e-8408-e2eeade7a7db)
May 08 18:55:02 myhost NetworkManager[1049]: <info>  [1525820102.6787] device (eno8): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
May 08 18:55:02 myhost NetworkManager[1049]: <info>  [1525820102.6797] device (eno8): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
May 08 18:55:02 myhost NetworkManager[1049]: <info>  [1525820102.6806] device (eno8): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
May 08 18:55:02 myhost NetworkManager[1049]: <info>  [1525820102.6811] dhcp4 (eno8): activation: beginning transaction (timeout in 45 seconds)
May 08 18:55:02 myhost kernel: ixgbe 0000:04:00.1 eno8: NIC Link is Down
May 08 18:55:02 myhost NetworkManager[1049]: <info>  [1525820102.6848] dhcp4 (eno8): dhclient started with pid 2167
May 08 18:55:02 myhost dhclient[2167]: DHCPDISCOVER on eno8 to port 67 interval 6 (xid=0x3e168505)
May 08 18:55:05 myhost NetworkManager[1049]: <info>  [1525820105.2694] device (eno8): carrier: link connected
May 08 18:55:05 myhost kernel: ixgbe 0000:04:00.1 eno8: NIC Link is Up 10 Gbps, Flow Control: RX/TX
May 08 18:55:05 myhost kernel: ixgbe 0000:04:00.1 eno8: NIC Link is Down
May 08 18:55:06 myhost NetworkManager[1049]: <info>  [1525820106.6215] device (eno8): carrier: link connected
May 08 18:55:06 myhost kernel: ixgbe 0000:04:00.1 eno8: NIC Link is Up 10 Gbps, Flow Control: RX/TX
May 08 18:55:06 myhost kernel: ixgbe 0000:04:00.1 eno8: NIC Link is Down
May 08 18:55:08 myhost dhclient[2167]: DHCPDISCOVER on eno8 to port 67 interval 6 (xid=0x3e168505)
May 08 18:55:09 myhost NetworkManager[1049]: <info>  [1525820109.8454] device (eno8): carrier: link connected
May 08 18:55:09 myhost kernel: ixgbe 0000:04:00.1 eno8: NIC Link is Up 10 Gbps, Flow Control: RX/TX
May 08 18:55:09 myhost kernel: ixgbe 0000:04:00.1 eno8: NIC Link is Down
May 08 18:55:10 myhost kernel: ixgbe 0000:04:00.1 eno8: NIC Link is Up 10 Gbps, Flow Control: RX/TX
May 08 18:55:10 myhost kernel: ixgbe 0000:04:00.1 eno8: NIC Link is Down
May 08 18:55:12 myhost NetworkManager[1049]: <info>  [1525820112.1334] device (eno8): carrier: link connected
May 08 18:55:12 myhost kernel: ixgbe 0000:04:00.1 eno8: NIC Link is Up 10 Gbps, Flow Control: RX/TX
May 08 18:55:12 myhost kernel: ixgbe 0000:04:00.1 eno8: NIC Link is Down
May 08 18:55:14 myhost dhclient[2167]: DHCPDISCOVER on eno8 to port 67 interval 11 (xid=0x3e168505)
May 08 18:55:14 myhost NetworkManager[1049]: <info>  [1525820114.4214] device (eno8): carrier: link connected
May 08 18:55:14 myhost kernel: ixgbe 0000:04:00.1 eno8: NIC Link is Up 10 Gbps, Flow Control: RX/TX
May 08 18:55:14 myhost kernel: ixgbe 0000:04:00.1 eno8: NIC Link is Down
May 08 18:55:16 myhost NetworkManager[1049]: <info>  [1525820116.0876] device (eno8): carrier: link connected
May 08 18:55:16 myhost kernel: ixgbe 0000:04:00.1 eno8: NIC Link is Up 10 Gbps, Flow Control: RX/TX
May 08 18:55:16 myhost kernel: ixgbe 0000:04:00.1 eno8: NIC Link is Down
May 08 18:55:18 myhost NetworkManager[1049]: <info>  [1525820118.7916] device (eno8): carrier: link connected
May 08 18:55:18 myhost kernel: ixgbe 0000:04:00.1 eno8: NIC Link is Up 10 Gbps, Flow Control: RX/TX
May 08 18:55:18 myhost kernel: ixgbe 0000:04:00.1 eno8: NIC Link is Down
May 08 18:55:20 myhost NetworkManager[1049]: <info>  [1525820120.4555] device (eno8): carrier: link connected
May 08 18:55:20 myhost kernel: ixgbe 0000:04:00.1 eno8: NIC Link is Up 10 Gbps, Flow Control: RX/TX
May 08 18:55:20 myhost kernel: ixgbe 0000:04:00.1 eno8: NIC Link is Down
May 08 18:55:20 myhost NetworkManager[1049]: <info>  [1525820120.8715] device (eno8): carrier: link connected
May 08 18:55:20 myhost kernel: ixgbe 0000:04:00.1 eno8: NIC Link is Up 10 Gbps, Flow Control: RX/TX
May 08 18:55:20 myhost kernel: ixgbe 0000:04:00.1 eno8: NIC Link is Down
May 08 18:55:22 myhost NetworkManager[1049]: <info>  [1525820122.2234] device (eno8): carrier: link connected
May 08 18:55:22 myhost kernel: ixgbe 0000:04:00.1 eno8: NIC Link is Up 10 Gbps, Flow Control: RX/TX
May 08 18:55:22 myhost kernel: ixgbe 0000:04:00.1 eno8: NIC Link is Down
May 08 18:55:23 myhost NetworkManager[1049]: <info>  [1525820123.5754] device (eno8): carrier: link connected
May 08 18:55:23 myhost kernel: ixgbe 0000:04:00.1 eno8: NIC Link is Up 10 Gbps, Flow Control: RX/TX
May 08 18:55:23 myhost kernel: ixgbe 0000:04:00.1 eno8: NIC Link is Down
Dmesg is much the same as above.
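To quantify how often the link is flapping, the journalctl output can be run through a short script that counts Up/Down transitions per interface. This is a quick diagnostic sketch I'm adding, not part of the original post; the sample lines below are taken from the log above:

```python
import re
from collections import Counter

def count_link_events(log_lines):
    """Count 'NIC Link is Up/Down' events per interface from ixgbe kernel messages."""
    pattern = re.compile(r"ixgbe \S+ (\S+): NIC Link is (Up|Down)")
    events = Counter()
    for line in log_lines:
        m = pattern.search(line)
        if m:
            events[(m.group(1), m.group(2))] += 1
    return events

log = [
    "May 08 18:55:05 myhost kernel: ixgbe 0000:04:00.1 eno8: NIC Link is Up 10 Gbps, Flow Control: RX/TX",
    "May 08 18:55:05 myhost kernel: ixgbe 0000:04:00.1 eno8: NIC Link is Down",
]
print(count_link_events(log))
```

Feeding the full journal through it (e.g. `journalctl -k | python3 flapcount.py`) makes the roughly once-per-second flap rate obvious at a glance.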




lspci | grep Eth

04:00.0 Ethernet controller: Intel Corporation Ethernet Connection X552 10 GbE SFP+

04:00.1 Ethernet controller: Intel Corporation Ethernet Connection X552 10 GbE SFP+

          Doogee X70 Affordable Smartphone First Impressions & Quick Review

Best Deal Alert! You can now buy the new Doogee X70 Affordable and Durable Smartphone for only $69.99. Hurry up, limited time offer!

Meet Doogee X70 Affordable Smartphone
The Doogee X70 is now officially available in shopping stores at a giveaway price. The device is arguably the world’s cheapest smartphone with the trending notch display. The X70 comes with a dual rear camera setup, 2GB of RAM, 16GB of storage space and a massive 4000mAh battery. The X70 has a plastic back cover, just like what we typically see on most budget-friendly offerings from Doogee. Topping it all off, the company included the trending iPhone X-like dual shooters. The X70 is built around a 5.5-inch HD IPS LCD capacitive touchscreen with a resolution of 540 x 1132 pixels and a density of 228 pixels per inch. Thanks to the notch, it stretches up to a 76.8% screen-to-body ratio and has a 1000:1 contrast ratio.

With 2GB of RAM, 16GB of ROM and up to 128GB of expandable storage, you no longer need to worry about insufficient space for all your wonderful photos, videos and movies. The X70 brings a 5.5-inch screen with a mellow 2.5D edge and stunning IPS technology, shaped for a comfortable grip. Feel your heartbeat the moment you touch every inch of the bright 232PPI display, from any angle. A slim body can hold breathtaking power inside: the X70 houses a notably large 4000mAh Li-polymer battery with a superb 300h of standby time. It has 20% more capacity than the previous generation’s, keeping the X70 online longer.

The camera is a window for discovering the beauty of nature. The X70 carries 5MP and 8MP dual rear cameras with 5 shooting modes to capture more professional and clearer portraits, whether at dawn, dusk or night. Capture the pure moments of life the instant the X70 snaps. Just choose your mood: Mono, Professional, Blur, FaceBeauty or Panorama.

Here you can read more about the unboxing of Doogee X70 Smartphone.

Next, watch the first impressions and the first experience on Android Oreo 8.1.0 OS
Firmware tested: Android Oreo 8.1.0 stock
Build number: DOOGEE-X70-Android8.1-20180804
Kernel version:  3.18.79+
Android Patch Level: July 5, 2018
Firmware version: DOOGEE-X70-Android8.1-20180804_20180804-1731
Hardware: mt6580
Board: k80_bsp
API Level: 27
Java VM: ART 2.1.0
Root: Firmware is not rooted.


Available features: Face Unlock & Recognition, U-notch Screen, Dual SIM Dual Standby (Nano SIM + Nano SIM), FM Radio, WiFi, WiFi Hotspot, Bluetooth 4.0, A-GPS, GPS, Microphone, Front Camera & Dual Rear Camera, LED Flash, USB Host, OTA Sync, Printing, Voice Search, Sound Recorder, Calculator, FlashLight Torch, Airplane Mode, App Widgets, Audio Jack Output, USB Accessory, OTG, Multitouch, Audio Profiles, Google Play Store & gapps, Adaptive brightness, BesLoudness Sound Enhancement, Battery Saver mode, Standby intelligent power saving, Fingerprint, Gesture Motion (Telephony Motion - Turn to silence, System Motion - Three point screenshot, Three point entry camera, Two point adjust volume, double tap to lock & Smart Somatosensory - Launcher, Camera, Music Player Unlock), Fast Capture, DuraSpeed, ScanCode, Navigationbar, Display area control, System Manager & Developer Options.

Sensors available on device: Fingerprint, Accelerometer, Light & Proximity

CPU: Mediatek MT6580A Quad-core Cortex-A7 @ 1.3GHz
GPU: ARM Mali-400 MP.

Cameras. X70 carries 5MP and 8MP dual rear cameras with 5 shooting modes to capture more professional and clearer portraits, whether at dawn, dusk or night: Mono, Professional, Blur, FaceBeauty or Panorama. F/2.2; AF; 80° Angle & LED Flash. Front camera: 5.0MP, F/2.2 & 80° Angle. You can expect image quality and rendering power that every photo fanatic deserves from a phone. Enjoy Capturing!

Display. Size: 5.5-inch U-notch 2.5D. Resolution: 540*1132 IPS. Pixel Density: 232ppi. Aspect Ratio: 19:9. Panel Technology: Multi-touch.

Battery. 4000mAh Li-ion with 5V/1A Charger, model BAT18724000, removable. Voltage: 3.8V 15.20Wh. Charging Voltage limit: 4.35V. Standard: GB/T18287-2013. So now you can enjoy movie marathons, launch apps, and play games without worrying about running out of battery.

Internal storage.   Capacity: 16 GB. Available Storage: 9.70 GB. (84%). SD slot up to 128 GB. Tested with  Mixza Tohaoll 64GB SDXC Micro SD Memory Card.

Cost-effectiveness. 7.6 out of 10 - pretty impressive given the low price.

In Antutu Video Tester 3.0 the Doogee X70 scores 650 points; this firmware version fully supported 14 video files, partially supported 9 video files and didn't manage to play 7 video file types.

The WiFi internet speed test also shows pretty good values: almost 37 Mbps in download, and the upload speed was over 26 Mbps. The router used in this test is a Xiaomi Mi WiFi Router, mounted in another room.

In use you may feel some lag, but it is to be expected that this will be remedied with further updates.

Android 8.1 Oreo OS on the Doogee X70 runs pretty well - transitions, animations, app switching and general OS fluidity are good, and the WiFi speed is very good as well. During our tests, we experienced zero force closes, hangups or stability issues. Google Play and gapps work without issue; Google Play Services updated with no problems, and third-party apps downloaded and updated without issues.

The Doogee X70 now has an amazingly affordable price and offers very good value for the money, so go ahead and buy it from here!

Photo gallery full album here. Official webpage here.

Unboxing Doogee S55 Rugged Smartphone – Rugged Inside Out – here. Here you can read about the unboxing of the Doogee S60 Lite. Here you can read about the unboxing of the Doogee BL7000 4G Phablet.

About Doogee: DOOGEE’s new slogan is “Live Your Life”. By delivering fashionable products with new technology, DOOGEE is aiming to be the most popular smartphone supplier in the world. DOOGEE always transmits enthusiasm and a positive attitude towards life, focusing on improving user experience and bringing people more convenience and joy.

Don’t miss any of our future video tutorials, follow us on Youtube. Like us on Facebook. Add us in your circles on Google+. Watch our photo albums on Flickr. Subscribe now to our newsletter. Biggest firmware download center.

          Difference between Docker Swarm and Kubernetes

Learn the difference between Docker Swarm and Kubernetes, with a comparison of the two container orchestration platforms in tabular form.

Difference between Docker swarm and Kubernetes
Docker Swarm v/s Kubernetes

When you are on the learning curve of application containerization, there comes a stage when you encounter orchestration tools for containers. If you started your learning with Docker, then Docker Swarm is the first cluster management tool you will have learned, followed by Kubernetes. So it's time to compare Docker Swarm and Kubernetes. In this article, we will quickly see what Docker Swarm is, what Kubernetes is, and then a comparison between the two.

What is Docker swarm?

Docker Swarm is a tool native to Docker, aimed at cluster management of Docker containers. Docker Swarm enables you to build a cluster of multiple nodes - VMs or physical machines - running the Docker engine. In turn, you will be running containers on multiple machines to facilitate a highly available, fault-tolerant environment. It is pretty simple to set up and native to Docker.

What is Kubernetes?

Kubernetes is a platform for managing containerized applications - i.e., containers - in a cluster environment, along with automation. It does almost the same job Swarm mode does, but in a different and enhanced way. It was originally developed by Google, and the project was later handed over to the CNCF. It works with container runtimes like Docker and rkt. Kubernetes installation is a bit more complex than Swarm's.

Compare Docker and Kubernetes

If someone asks you for a comparison between Docker and Kubernetes, that is not a valid question in the first place. You cannot differentiate between Docker and Kubernetes as such: Docker is an engine which runs containers (and colloquially refers to the containers themselves), while Kubernetes is an orchestration platform which manages Docker containers in a cluster environment. So one cannot compare Docker with Kubernetes; the meaningful comparison is between Docker Swarm and Kubernetes.

Difference between Docker Swarm and Kubernetes

I have added a comparison of Swarm and Kubernetes in the table below for easy readability.
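To make the contrast concrete, here is how the same three-replica nginx service might be declared on each platform. These are minimal illustrative sketches; the service name and image are arbitrary, and the two documents would normally live in separate files:

```yaml
# Docker Swarm: docker-compose.yml, deployed with `docker stack deploy -c docker-compose.yml web`
version: "3"
services:
  nginx:
    image: nginx:alpine
    deploy:
      replicas: 3
---
# Kubernetes: deployment.yml, applied with `kubectl apply -f deployment.yml`
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
```

Swarm reuses the familiar Compose format, while Kubernetes asks for more explicit structure (selectors, labels, pod templates) in exchange for finer control.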

          Linux Kernel vs. Mac Kernel

Difference Between Linux Kernel & Mac Kernel
Both the Linux kernel and the macOS kernel are UNIX-based. Some people say that macOS is "Linux"; some say that the two are compatible because of similarities between commands and the file system hierarchy. Today I want to show a little of both, covering the differences and similarities between the Linux kernel and the Mac kernel, as I mentioned in previous Linux kernel articles.

Kernel of macOS

In 1985, Steve Jobs left Apple due to a disagreement with CEO John Sculley and Apple's board of directors. He then founded a new computer company called NeXT. Jobs wanted a new computer (with a new operating system) to be released quickly. To save time, the NeXT team used the Carnegie Mellon Mach kernel and parts of the BSD code base to create the NeXTSTEP operating system.
NeXTSTEP desktop operating system
NeXT never became a financial success, in part due to Jobs's habit of spending money as if he were still at Apple. Meanwhile, Apple tried unsuccessfully to update its operating system on several occasions, even partnering with IBM. In 1997, Apple bought NeXT for $429 million. As part of the deal, Steve Jobs returned to Apple, and NeXTSTEP became the foundation of macOS and iOS.

Linux kernel

Unlike the macOS kernel, Linux was not created as part of a commercial enterprise. Instead, it was created in 1991 by computer science student Linus Torvalds. Originally, the kernel was written to the specifications of Linus's own computer, because he wanted to take advantage of its new 80386 processor. Linus posted the code for his new kernel on the web in August 1991, and soon he was receiving code and feature suggestions from around the world. The following year, Orest Zborowski ported the X Window System to Linux, giving it the ability to support a graphical user interface.

MacOS kernel resources

The macOS kernel is officially known as XNU. The acronym stands for "XNU is Not Unix." According to Apple's official Github page, XNU is "a hybrid kernel that combines the Mach kernel developed at Carnegie Mellon University with FreeBSD and C++ components for the drivers." The BSD subsystem part of the code is "normally implemented as userspace servers in microkernel systems". The Mach part is responsible for low-level work such as multitasking, protected memory, virtual memory management, kernel debugging support, and console I/O.
macos kernel resources
Map of MacOS: the heart of everything is called Darwin; and within it, we have separate system utilities and the XNU kernel, which is composed in parts by the Mach kernel and by the BSD kernel.

Unlike Linux, this kernel is split into what is called a hybrid kernel, allowing one part of it to stop for maintenance while another continues to work. This has also fueled debates over whether a hybrid kernel is more stable: if one of its parts stops, the other can restart it.

Linux kernel resources

While the macOS kernel combines the capabilities of a microkernel (Mach) with those of a monolithic kernel (BSD), Linux is solely a monolithic kernel. A monolithic kernel is responsible for managing the CPU, memory, inter-process communication, device drivers, the file system, and system service calls. That is, it does everything without subdivisions.

Obviously, this has already drawn much discussion, even involving Linus himself and other developers, who debate whether a monolithic kernel is more susceptible to errors as well as slower; but Linux proves the opposite year after year and can be optimized much like a hybrid kernel. In addition, with the help of Red Hat, the kernel now includes live patching, which allows real-time maintenance with no reboot required.

Differences between MacOS Kernel (XNU) and Linux

  1. The macOS kernel (XNU) has existed for longer than Linux and was based on a combination of two even older code bases. This weighs in its favor in terms of stability and history.
  2. On the other hand, Linux is newer, written from scratch, and used on many more devices; so much so that it is present in all of the top 500 supercomputers, including the recently inaugurated North American supercomputer.

In terms of the base system, macOS does not ship with a command-line package manager in the terminal.
Installation of packages in .pkg format - as on BSD - is done via this command line, if not through the GUI:
$ sudo installer -pkg /path/to/package.pkg -target /
NOTE: the macOS .pkg format is totally different from BSD's .pkg!
Do not assume that macOS supports BSD programs and vice versa; they will neither install nor run.
You can have a command equivalent to apt on macOS through two options: installing Homebrew or MacPorts. In the end, you will have the following syntax:
$ brew install PACKAGE
$ port install PACKAGE
Remember that not all programs/packages available for Linux or BSD will be in the macOS ports collections.


In terms of compatibility, there is not much to say; the Darwin core and the Linux kernel are as distinct as the Windows NT kernel and the BSD kernel. Drivers written for Linux do not run on macOS and vice versa; they must be compiled for each system beforehand. Curiously, Linux ships a number of daemons that macOS also uses, including the CUPS print server!

What we do have in common, in fact, are terminal tools like the GNU utils packages or Busybox, so we have not only Bash but also gcc, rm, dd, top, nano, vim, etc. This is intrinsic to all UNIX-based systems. In addition, we share the filesystem folder architecture, with common root-level folders such as /lib, /var, /etc, /dev, and so on.
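That shared UNIX heritage is visible even from a scripting language: the same POSIX uname() interface is exposed identically by both kernels, so the snippet below runs unchanged on Linux and macOS (a small illustration; the printed values differ per system):

```python
import os

# os.uname() wraps the POSIX uname() call, available on both kernels.
u = os.uname()
# sysname is "Linux" on Linux and "Darwin" on macOS.
print(u.sysname, u.release)
```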


macOS and Linux have their similarities and differences, just as BSD does compared to Linux. But because they are based on UNIX, they share patterns that make their environments mutually familiar: those who use Linux and migrate to macOS, or vice versa, will recognize a number of commands and features. The most striking difference is the graphical interface, which is mostly a matter of personal adaptation.

          6 open source tools for making your own VPN

Want to try your hand at building your own VPN but aren’t sure where to start?

If you want to try your hand at building your own VPN but aren’t sure where to start, you’ve come to the right place. I’ll compare six of the best free and open source tools to set up and use a VPN on your own server. These VPNs work whether you want to set up a site-to-site VPN for your business or just create a remote access proxy to unblock websites and hide your internet traffic from ISPs.
Which is best depends on your needs and limitations, so take into consideration your own technical expertise, environment, and what you want to achieve with your VPN. In particular, consider the following factors:
  • VPN protocol
  • Number of clients and types of devices
  • Server distro compatibility
  • Technical expertise required


Algo was designed from the bottom up to create VPNs for corporate travelers who need a secure proxy to the internet. It “includes only the minimal software you need,” meaning you sacrifice extensibility for simplicity. Algo is based on StrongSwan but cuts out all the things that you don’t need, which has the added benefit of removing security holes that a novice might otherwise not notice.
As an added bonus, it even blocks ads! Algo supports only the IKEv2 protocol and WireGuard. Because IKEv2 support is built into most devices these days, it doesn’t require a client app the way OpenVPN does. Algo can be deployed using Ansible on Ubuntu (the preferred option), Windows, RedHat, CentOS, and FreeBSD. Setup is automated using Ansible, which configures the server based on your answers to a short set of questions. It’s also very easy to tear down and re-deploy on demand.
Algo is probably the easiest and fastest VPN to set up and deploy on this list. It’s extremely tidy and well thought out. If you don’t need any of the more advanced features offered by other tools and just need a secure proxy, it’s a great option. Note that Algo explicitly states it’s not meant for geo-unblocking or evading censorship, and was primarily designed for confidentiality.


Streisand can be installed on any Ubuntu 16.04 server using a single command; the process takes about 10 minutes. It supports L2TP, OpenConnect, OpenSSH, OpenVPN, Shadowsocks, Stunnel, Tor bridge, and WireGuard. Depending on which protocol you choose, you may need to install a client app.
In many ways, Streisand is similar to Algo, but it offers more protocols and customization. This takes a bit more effort to manage and secure but is also more flexible. Note Streisand does not support IKEv2. I would say Streisand is more effective for bypassing censorship in places like China and Turkey due to its versatility, but Algo is easier and faster to set up.
The setup is automated using Ansible, so there’s not much technical expertise required. You can easily add more users by sending them custom-generated connection instructions, which include an embedded copy of the server’s SSL certificate.
Tearing down Streisand is a quick and painless process, and you can re-deploy on demand.


OpenVPN requires both client and server applications to set up VPN connections using the protocol of the same name. OpenVPN can be tweaked and customized to fit your needs, but it also requires the most technical expertise of the tools covered here. Both remote access and site-to-site configurations are supported; the former is what you’ll need if you plan on using your VPN as a proxy to the internet. Because client apps are required to use OpenVPN on most devices, the end user must keep them updated.
Server-side, you can opt to deploy in the cloud or on your Linux server. Compatible distros include CentOS, Ubuntu, Debian, and openSUSE. Client apps are available for Windows, MacOS, iOS, and Android, and there are unofficial apps for other devices. Enterprises can opt to set up an OpenVPN Access Server, but that’s probably overkill for individuals, who will want the Community Edition.
OpenVPN is relatively easy to configure with static key encryption, but it isn’t all that secure. Instead, I recommend setting it up with easy-rsa, a key management package you can use to set up a public key infrastructure. This allows you to connect multiple devices at a time and protect them with perfect forward secrecy, among other benefits. OpenVPN uses SSL/TLS for encryption, and you can specify DNS servers in your configuration.
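As a rough sketch of where that leaves you, a PKI-based server configuration ends up looking something like the following. The certificate and key file names are whatever easy-rsa generated for you, and the subnet and pushed DNS server are arbitrary examples, not anything OpenVPN mandates:

```conf
port 1194
proto udp
dev tun

# Files produced by the easy-rsa public key infrastructure
ca ca.crt
cert server.crt
key server.key
dh dh.pem

# Hand out tunnel addresses from this (example) subnet
topology subnet
server 10.8.0.0 255.255.255.0

# Push a DNS server of your choice to connecting clients
push "dhcp-option DNS 10.8.0.1"

keepalive 10 120
persist-key
persist-tun
```

The `push "dhcp-option DNS ..."` line is how you specify DNS servers for clients, as mentioned above.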
OpenVPN can traverse firewalls and NAT firewalls, which means you can use it to bypass gateways and firewalls that might otherwise block the connection. It supports both TCP and UDP transports.


You might have come across a few different VPN tools with “Swan” in the name. FreeS/WAN, OpenSwan, LibreSwan, and strongSwan are all forks of the same project, and the lattermost is my personal favorite. Server-side, strongSwan runs on Linux 2.6, 3.x, and 4.x kernels, Android, FreeBSD, macOS, iOS, and Windows.
StrongSwan uses the IKEv2 protocol and IPSec. Compared to OpenVPN, IKEv2 connects much faster while offering comparable speed and security. This is useful if you prefer a protocol that doesn’t require installing an additional app on the client, as most newer devices manufactured today natively support IKEv2, including Windows, MacOS, iOS, and Android.
StrongSwan is not particularly easy to use, and despite decent documentation, it uses a different vocabulary than most other tools, which can be confusing. Its modular design makes it great for enterprises, but that also means it’s not the most streamlined. It’s certainly not as straightforward as Algo or Streisand.
Access control can be based on group memberships using X.509 attribute certificates, a feature unique to strongSwan. It supports EAP authentication methods for integration into other environments like Windows Active Directory. StrongSwan can traverse NAT firewalls.
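To give a flavor of that vocabulary, a road-warrior IKEv2 setup in strongSwan's classic ipsec.conf format looks roughly like this; the identity, certificate name, and client address pool are placeholders, not values strongSwan requires:

```conf
conn ikev2-roadwarrior
    auto=add
    keyexchange=ikev2
    # "left" is the local (server) side, "right" the remote (client) side
    left=%any
    leftid=@vpn.example.com
    leftcert=serverCert.pem
    leftsubnet=0.0.0.0/0
    right=%any
    # EAP username/password authentication for clients
    rightauth=eap-mschapv2
    rightsourceip=10.10.10.0/24
    eap_identity=%identity
```

The left/right naming convention is exactly the kind of strongSwan-specific vocabulary that takes some getting used to.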


SoftEther started out as a project by a graduate student at the University of Tsukuba in Japan. SoftEther VPN Server and VPN Bridge run on Windows, Linux, OSX, FreeBSD, and Solaris, while the client app works on Windows, Linux, and MacOS. VPN Bridge is mainly for enterprises that need to set up site-to-site VPNs, so individual users will just need the server and client programs to set up remote access.
SoftEther supports the OpenVPN, L2TP, SSTP, and EtherIP protocols, but its own SoftEther protocol claims immunity to deep packet inspection thanks to its “Ethernet over HTTPS” camouflage. SoftEther also makes a few tweaks to reduce latency and increase throughput. Additionally, SoftEther includes a clone function that allows you to easily transition from OpenVPN to SoftEther.
SoftEther can traverse NAT firewalls and bypass firewalls. On restricted networks that permit only ICMP and DNS packets, you can utilize SoftEther’s VPN over ICMP or VPN over DNS options to penetrate the firewall. SoftEther works with both IPv4 and IPv6.
SoftEther is easier to set up than OpenVPN and strongSwan but is a bit more complicated than Streisand and Algo.


WireGuard is the newest tool on this list; it's so new that it’s not even finished yet. That being said, it offers a fast and easy way to deploy a VPN. It aims to improve on IPSec by making it simpler and leaner like SSH.
Like OpenVPN, WireGuard is both a protocol and a software tool used to deploy a VPN that uses said protocol. A key feature is “crypto key routing,” which associates public keys with a list of IP addresses allowed inside the tunnel.
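A server-side wg0.conf (in the wg-quick style) illustrates the idea: each [Peer]'s AllowedIPs list is exactly the set of tunnel addresses the server will accept from, and route to, that public key. The keys and addresses below are placeholders:

```ini
[Interface]
PrivateKey = <server-private-key>
Address = 10.0.0.1/24
ListenPort = 51820

[Peer]
# Packets arriving inside the tunnel are accepted from this peer only
# if their source address falls within AllowedIPs; outbound packets to
# these addresses are routed to this peer's public key.
PublicKey = <client-public-key>
AllowedIPs = 10.0.0.2/32
```

This coupling of public keys to IP ranges is what WireGuard calls crypto key routing.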
WireGuard is available for Ubuntu, Debian, Fedora, CentOS, MacOS, Windows, and Android. WireGuard works on both IPv4 and IPv6.
WireGuard is much lighter than most other VPN protocols, and it transmits packets only when data needs to be sent.
The developers say WireGuard should not yet be trusted because it hasn’t been fully audited yet, but you’re welcome to give it a spin. It could be the next big thing!

Homemade VPN vs. commercial VPN

Making your own VPN adds a layer of privacy and security to your internet connection, but if you’re the only one using it, then it would be relatively easy for a well-equipped third party, such as a government agency, to trace activity back to you.
Furthermore, if you plan to use your VPN to unblock geo-locked content, a homemade VPN may not be the best option. Since you’ll only be connecting from a single IP address, your VPN server is fairly easy to block.
Good commercial VPNs don’t have these issues. With a provider like ExpressVPN, you share the server’s IP address with dozens or even hundreds of other users, making it nigh-impossible to track a single user’s activity. You also get a huge range of hundreds or thousands of servers to choose from, so if one has been blacklisted, you can just switch to another.
The tradeoff of a commercial VPN, however, is that you must trust the provider not to snoop on your internet traffic. Be sure to choose a reputable provider with a clear no-logs policy.

          Understanding the State of Container Networking      Cache   Translate Page

Containers have revolutionized the way applications are developed and deployed, but what about the network?

By Sean Michael Kerner | Posted Sep 4, 2018
Container networking is a fast-moving space with lots of different pieces. In a session at the Open Source Summit, Frederick Kautz, principal software engineer at Red Hat, outlined the state of container networking today and where it is headed in the future.

Containers have become increasingly popular in recent years, particularly the use of Docker containers, but what exactly are containers?

Kautz explained that containers make use of the Linux kernel's ability to create multiple isolated user-space areas. The isolation is enabled by two core features: control groups (cgroups), which limit and isolate the resource usage of process groups, and namespaces, which partition key kernel structures for processes, hostnames, users, and networking.
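These two primitives can be inspected from any Linux shell via procfs; the paths below are standard kernel interfaces, not specific to any container runtime:

```shell
# Each process's namespaces appear as symlinks under /proc/<pid>/ns;
# two processes share a namespace when their symlinks point to the
# same inode.
ls -l /proc/self/ns

# The UTS namespace (hostname isolation) of the current shell:
readlink /proc/self/ns/uts

# The control groups the current process belongs to:
cat /proc/self/cgroup
```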

Container Networking Types

While there are different container technologies and orchestration systems, when it comes to networking, Kautz said there are really just four core networking primitives:

Bridge mode hooks a container into a specific Linux bridge, and every container attached to that bridge sees its traffic.

Kautz explained that Host mode is basically where the container uses the same networking space as the host. As such, whatever IP addresses the host has are shared with the containers.

In an Overlay networking approach, a virtual network sits on top of the underlay, the physical networking hardware.

The Underlay approach makes use of the core network fabric and hardware directly.

To make matters somewhat more confusing, Kautz said that multiple container networking models are often used together, for example a bridge together with an overlay.

Network Connections

Additionally, container networking models can benefit from MACVLAN and IPVLAN, which tie containers to specific MAC or IP addresses for additional isolation.

 Kautz added that SR-IOV is a hardware mechanism that ties a physical Network Interface Card (NIC) to containers providing direct access.
Container Networking


On top of the different container networking models are different approaches for Software Defined Networking. For the management plane, there are functionally two core approaches at this point: the Container Networking Interface (CNI), which is what is used by Kubernetes, and the libnetwork interface that is used by Docker.
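For a sense of what the CNI side looks like, here is a minimal configuration for the standard bridge plugin; the network name, bridge name, and subnet are illustrative. A runtime such as Kubernetes hands this JSON to the plugin, which wires the container into the bridge and asks the IPAM plugin for an address:

```json
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
```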

Kautz noted that with Docker recently announcing support for Kubernetes, it's likely that CNI support will be following as well.

Among the different technologies for container networking today are:

Contiv - backed by Cisco, provides a VXLAN overlay model

Flannel/Calico - backed by Tigera, provides an overlay network between hosts and allocates a separate subnet per host.

Weave - backed by Weaveworks, uses a standard port number for containers

Contrail - backed by Juniper networks and open sourced as the TungstenFabric project, provides policy support and gateway services.

OpenDaylight - open source effort that integrates with OpenStack Kuryr

OVN - open source effort that creates logical switches and routers.

Upcoming Efforts

While there are already multiple production-grade solutions for container networking, the technology continues to evolve. Among the newer approaches is using eBPF (extended Berkeley Packet Filter) for networking control, which is used by the Cilium open source project.

Additionally, there is an effort to use shared memory rather than physical NICs to help enable networking. Kautz also highlighted the emerging area of service mesh technology, in particular the Istio project, which is backed by Google. With a service mesh, networking is offloaded to the mesh, which provides load balancing, failure recovery, and service discovery, among other capabilities.

Organizations today typically choose a single SDN approach that will connect into a Kubernetes CNI, but that could change in the future thanks to the Multus CNI effort. With Multus CNI multiple CNI plugins can be used, enabling multiple SDN technologies to run in a Kubernetes cluster.

Sean Michael Kerner is a senior editor at EnterpriseNetworkingPlanet. Follow him on Twitter @TechJournalist.

          8 Linux commands for effective process management      Cache   Translate Page

Manage your applications throughout their lifecycles with these key commands.

Generally, an application process' lifecycle has three main states: start, run, and stop. Each state can and should be managed carefully if we want to be competent administrators. These eight commands can be used to manage processes through their lifecycles.

Starting a process

The easiest way to start a process is to type its name at the command line and press Enter. If you want to start an Nginx web server, type nginx. Perhaps you just want to check the version.

alan@workstation:~$ nginx

alan@workstation:~$ nginx -v

nginx version: nginx/1.14.0

Viewing your executable path

The above demonstration of starting a process assumes the executable file is located in your executable path. Understanding this path is key to reliably starting and managing a process. Administrators often customize this path for their desired purpose. You can view your executable path using echo $PATH.

alan@workstation:~$ echo $PATH



Use the which command to view the full path of an executable file.

alan@workstation:~$ which nginx                                                    


I will use the popular web server software Nginx for my examples. Let's assume that Nginx is installed. If the command which nginx returns nothing, then Nginx was not found, because which searches only your defined executable path. There are three ways to remedy a situation where a process cannot be started simply by name. The first is to type the full path, though I'd rather not have to type all of that. Would you?

alan@workstation:~$ /home/alan/web/prod/nginx/sbin/nginx -v

nginx version: nginx/1.14.0

The second solution would be to install the application in a directory already in your executable path. However, this may not be possible, particularly if you don't have root privileges. The third solution is to update your executable path environment variable to include the directory where the specific application you want to use is installed. This solution is shell-dependent. For example, Bash users would need to edit the PATH= line in their .bashrc file.
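As a sketch, the line added to .bashrc could look like the following; the prefix below is the example install location used earlier in this article, so adjust it to wherever your copy actually lives:

```shell
# Append the directory containing the nginx binary to the executable path.
export PATH="$PATH:/home/alan/web/prod/nginx/sbin"

# The change takes effect immediately in the current shell; placing the
# line in ~/.bashrc makes it permanent for future interactive shells.
echo "$PATH"
```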
Now, repeat your echo and which commands or try to check the version. Much easier!

alan@workstation:~$ echo $PATH


alan@workstation:~$ which nginx


alan@workstation:~$ nginx -v                                                

nginx version: nginx/1.14.0

Keeping a process running


A process may not continue to run when you log out or close your terminal. This special case can be avoided by preceding the command you want to run with the nohup command. Also, appending an ampersand (&) will send the process to the background and allow you to continue using the terminal, so a long-running program can be started as nohup <command> &.
One nice thing nohup does is return the running process's PID. I'll talk more about the PID next.
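As a minimal sketch, with sleep standing in for any long-running program:

```shell
# Run a stand-in long-running job immune to hangup signals, in the
# background so the terminal stays usable. Output is redirected here;
# otherwise nohup appends it to a file named nohup.out.
nohup sleep 10 >/dev/null 2>&1 &

# $! holds the PID of the most recently backgrounded process.
echo "Started background process with PID $!"
```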

Manage a running process

Each process is given a unique process identification number (PID). This number is what we use to manage each process. We can also use the process name, as I'll demonstrate below. There are several commands that can check the status of a running process. Let's take a quick look at these.


The most common is ps. The default output of ps is a simple list of the processes running in your current terminal. As you can see below, the first column contains the PID.

alan@workstation:~$ ps


23989 pts/0    00:00:00 bash

24148 pts/0    00:00:00 ps

I'd like to view the Nginx process I started earlier. To do this, I tell ps to show me every running process (-e) and a full listing (-f).

alan@workstation:~$ ps -ef


root         1     0  0 Aug18 ?        00:00:10 /sbin/init splash

root         2     0  0 Aug18 ?        00:00:00 [kthreadd]

root         4     2  0 Aug18 ?        00:00:00 [kworker/0:0H]

root         6     2  0 Aug18 ?        00:00:00 [mm_percpu_wq]

root         7     2  0 Aug18 ?        00:00:00 [ksoftirqd/0]

root         8     2  0 Aug18 ?        00:00:20 [rcu_sched]

root         9     2  0 Aug18 ?        00:00:00 [rcu_bh]

root        10     2  0 Aug18 ?        00:00:00 [migration/0]

root        11     2  0 Aug18 ?        00:00:00 [watchdog/0]

root        12     2  0 Aug18 ?        00:00:00 [cpuhp/0]

root        13     2  0 Aug18 ?        00:00:00 [cpuhp/1]

root        14     2  0 Aug18 ?        00:00:00 [watchdog/1]

root        15     2  0 Aug18 ?        00:00:00 [migration/1]

root        16     2  0 Aug18 ?        00:00:00 [ksoftirqd/1]

alan     20506 20496  0 10:39 pts/0    00:00:00 bash

alan     20520  1454  0 10:39 ?        00:00:00 nginx: master process nginx

alan     20521 20520  0 10:39 ?        00:00:00 nginx: worker process

alan     20526 20506  0 10:39 pts/0    00:00:00 man ps

alan     20536 20526  0 10:39 pts/0    00:00:00 pager

alan     20564 20496  0 10:40 pts/1    00:00:00 bash

You can see the Nginx processes in the output of the ps command above. The command displayed almost 300 lines, but I shortened it for this illustration. As you can imagine, trying to handle 300 lines of process information is a bit messy. We can pipe this output to grep to filter for nginx.

alan@workstation:~$ ps -ef |grep nginx

alan     20520  1454  0 10:39 ?        00:00:00 nginx: master process nginx

alan     20521 20520  0 10:39 ?        00:00:00 nginx: worker process

That's better. We can quickly see that Nginx has PIDs of 20520 and 20521.


The pgrep command was created to further simplify things by removing the need to call grep separately.

alan@workstation:~$ pgrep nginx



Suppose you are in a hosting environment where multiple users are running several different instances of Nginx. You can exclude others from the output with the -u option.

alan@workstation:~$ pgrep -u alan nginx




Another nifty one is pidof. This command will check the PID of a specific binary even if another process with the same name is running. To set up an example, I copied my Nginx to a second directory and started it with the prefix set accordingly. In real life, this instance could be in a different location, such as a directory owned by a different user. If I run both Nginx instances, the ps -ef output shows all their processes.

alan@workstation:~$ ps -ef |grep nginx

alan     20881  1454  0 11:18 ?        00:00:00 nginx: master process ./nginx -p /home/alan/web/prod/nginxsec

alan     20882 20881  0 11:18 ?        00:00:00 nginx: worker process

alan     20895  1454  0 11:19 ?        00:00:00 nginx: master process nginx

alan     20896 20895  0 11:19 ?        00:00:00 nginx: worker process

Using grep or pgrep will show PID numbers, but we may not be able to discern which instance is which.

alan@workstation:~$ pgrep nginx





The pidof command can be used to determine the PID of each specific Nginx instance.

alan@workstation:~$ pidof /home/alan/web/prod/nginxsec/sbin/nginx

20882 20881

alan@workstation:~$ pidof /home/alan/web/prod/nginx/sbin/nginx

20896 20895


The top command has been around a long time and is very useful for viewing details of running processes and quickly identifying issues such as memory hogs. Its default view is shown below.

top - 11:56:28 up 1 day, 13:37,  1 user,  load average: 0.09, 0.04, 0.03

Tasks: 292 total,   3 running, 225 sleeping,   0 stopped,   0 zombie

%Cpu(s):  0.1 us,  0.2 sy,  0.0 ni, 99.7 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st

KiB Mem : 16387132 total, 10854648 free,  1859036 used,  3673448 buff/cache

KiB Swap:        0 total,        0 free,        0 used. 14176540 avail Mem


17270 alan      20   0 3930764 247288  98992 R   0.7  1.5   5:58.22 gnome-shell

20496 alan      20   0  816144  45416  29844 S   0.5  0.3   0:22.16 gnome-terminal-

21110 alan      20   0   41940   3988   3188 R   0.1  0.0   0:00.17 top

    1 root      20   0  225564   9416   6768 S   0.0  0.1   0:10.72 systemd

    2 root      20   0       0      0      0 S   0.0  0.0   0:00.01 kthreadd

    4 root       0 -20       0      0      0 I   0.0  0.0   0:00.00 kworker/0:0H

    6 root       0 -20       0      0      0 I   0.0  0.0   0:00.00 mm_percpu_wq

    7 root      20   0       0      0      0 S   0.0  0.0   0:00.08 ksoftirqd/0

The update interval can be changed by typing the letter s followed by the number of seconds you prefer for updates. To make it easier to monitor our example Nginx processes, we can call top and pass the PID(s) using the -p option. This output is much cleaner.

alan@workstation:~$ top -p20881 -p20882 -p20895 -p20896

Tasks:   4 total,   0 running,   4 sleeping,   0 stopped,   0 zombie

%Cpu(s):  2.8 us,  1.3 sy,  0.0 ni, 95.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st

KiB Mem : 16387132 total, 10856008 free,  1857648 used,  3673476 buff/cache

KiB Swap:        0 total,        0 free,        0 used. 14177928 avail Mem


20881 alan      20   0   12016    348      0 S   0.0  0.0   0:00.00 nginx

20882 alan      20   0   12460   1644    932 S   0.0  0.0   0:00.00 nginx

20895 alan      20   0   12016    352      0 S   0.0  0.0   0:00.00 nginx

20896 alan      20   0   12460   1628    912 S   0.0  0.0   0:00.00 nginx

It is important to correctly determine the PID when managing processes, particularly stopping one. Also, if using top in this manner, any time one of these processes is stopped or a new one is started, top will need to be informed of the new ones.

Stopping a process


Interestingly, there is no stop command. In Linux, there is the kill command. Kill is used to send a signal to a process. The most commonly used signals are "terminate" (SIGTERM) and "kill" (SIGKILL). However, there are many more. Below are some examples. The full list can be shown with kill -L.

 1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL       5) SIGTRAP

 6) SIGABRT      7) SIGBUS       8) SIGFPE       9) SIGKILL     10) SIGUSR1

11) SIGSEGV     12) SIGUSR2     13) SIGPIPE     14) SIGALRM     15) SIGTERM

Notice signal number nine is SIGKILL. Usually, we issue a command such as kill -9 20896. The default signal is 15, which is SIGTERM. Keep in mind that many applications have their own method for stopping. Nginx uses a -s option for passing a signal such as "stop" or "reload." Generally, I prefer to use an application's specific method to stop an operation. However, I'll demonstrate the kill command to stop Nginx process 20896 and then confirm it is stopped with pgrep. The PID 20896 no longer appears.

alan@workstation:~$ kill -9 20896


alan@workstation:~$ pgrep nginx






The command pkill is similar to pgrep in that it searches by name, but it sends a signal to every process that matches. This means you have to be very careful when using pkill. In my example with Nginx, I might not choose to use it if I only want to kill one Nginx instance. I can pass the Nginx option -s stop to a specific instance to kill it, or I need to use grep to filter on the full ps output.

/home/alan/web/prod/nginx/sbin/nginx -s stop

/home/alan/web/prod/nginxsec/sbin/nginx -s stop

If I want to use pkill, I can include the -f option to ask pkill to filter across the full command line argument. This of course also applies to pgrep. So, first I can check with pgrep -a before issuing the pkill -f.

alan@workstation:~$ pgrep -a nginx

20881 nginx: master process ./nginx -p /home/alan/web/prod/nginxsec

20882 nginx: worker process

20895 nginx: master process nginx

20896 nginx: worker process

I can also narrow down my result with pgrep -f. The same argument used with pkill stops the process.

alan@workstation:~$ pgrep -f nginxsec



alan@workstation:~$ pkill -f nginxsec

The key thing to remember with pgrep (and especially pkill) is that you must always be sure that your search result is accurate so you aren't unintentionally affecting the wrong processes.
Most of these commands have many command line options, so I always recommend reading the man page on each one. While most of these exist across platforms such as Linux, Solaris, and BSD, there are a few differences. Always test and be ready to correct as needed when working at the command line or writing scripts.

          Windows 10 and Windows 10 Mobile get new Builds, but only the former has a rosy future      Cache   Translate Page      

We have seen Microsoft practically abandon Windows Phone to its fate. Despite a more than promising start, Windows on mobile ended up a fiasco of epic proportions. Other alternatives to iOS and Android have also disappeared: Firefox OS, Tizen in its original form, and MeeGo are history, but their collapse does not compare to that of Windows Phone.

The problem is that, even if not in the same numbers as iOS and Android, the installed base of Windows phones is what it is. Those users are still out there, and it seems the American company does not want to leave them stranded, at least not yet. That is the only way to explain why it keeps releasing updates for Windows 10 Mobile today, when not even the company itself believes in its future.

A very minor update

If you own a phone running Windows 10 Mobile, you will receive a notification in the coming hours, if you have not already, announcing that a new update is available.

It is Build 15063.1324, an update focused above all on improving system stability, with performance improvements and bug fixes. Do not expect new features. This is what the Build delivers:

  • Security updates for Internet Explorer, Microsoft Edge, the Microsoft scripting engine, Microsoft Graphics Component, Windows Media, Windows Shell, Device Guard, Windows datacenter networking, the Windows kernel, Windows Hyper-V, Windows virtualization and kernel, Microsoft JET Database Engine, Windows MSXML, and Windows Server.

A Build that brings nothing beyond security updates and performance fixes. If it has not arrived yet and you want to check its availability, open the Settings menu, go to "Update & security," then tap "Check for updates" and see whether Build 15063.1324 is offered for download and installation.

A new Build for Windows 10


The picture is completely different on the desktop: constant releases, with waves of Builds included, now joined by Build 17134.285. These are the improvements it brings:

  • Provides protection against a Spectre Variant 2 vulnerability (CVE-2017-5715) on devices with ARM64 processors.
  • Fixes the issue that prevents the Program Compatibility Assistant (PCA) from running correctly.
  • Adds security updates for Internet Explorer, Microsoft Edge, the Microsoft scripting engine, Microsoft Graphics Component, Windows Media, Windows Shell, Windows Hyper-V, Windows datacenter networking, Windows virtualization and kernel, Linux, the Windows kernel, Microsoft JET Database Engine, Windows MSXML, and Windows Server.

If you want to check the availability of this Build, you can download it by opening the Settings menu, going to "Update & security," and then clicking "Check for updates."

Source | Microsoft
At Xataka Windows | Microsoft prepares for the arrival of the Windows 10 October 2018 Update by releasing up to four cumulative updates


The article "Windows 10 and Windows 10 Mobile get new Builds, but only the former has a rosy future" was originally published at Xataka Windows by Jose Antonio.

          Linux Drivers Getting Initial Support for Future AMD APUs      Cache   Translate Page      

Once again, the open-source Linux drivers AMD has been working on are revealing some information about future products, though not necessarily a great deal at the moment. In the latest patches for its Linux kernel driver, AMD has added initial support for Picasso and Raven2 APUs. In the notes for these patches, Raven2 is described as 'a new Raven APU' while Picasso is said to be 'a new API similar to raven.'

Picasso is expected to be the successor to the Raven Ridge chips and, as Phoronix notes, may be launching at the end of this year as a 2019 platform. Exactly how a Raven2 APU would fit into this is hard to say, though. It may also be interesting to note that the Raven2 APUs share the same PCI ID for their GPU component as the current Raven Ridge parts, while Picasso has a different PCI ID. Raven2 does have a different revision ID, golden register settings, and more, so it would appear to be a more refined design, but Picasso may feature something a bit different. Time will tell, and speaking of timing, these patches being sent out now means Picasso may achieve initial enablement in time for the next kernel cycle.

Source: Phoronix

          Episode 85: Does This Make FOSS Better or Worse      Cache   Translate Page      
Does This Make FOSS Better or Worse | Ask Noah Show 85

Does the "Commons Clause" help the commons? The Commons Clause was announced recently, along with several projects moving portions of their code base under it. It's an additional restriction intended to be applied to existing open source licenses with the effect of preventing the work from being sold. We play devil's advocate and tell you why this might not be such a bad thing. As always, your calls go to the front of the line, and we give you the details on how you can win free stuff in the Telegram group!

-- The Cliff Notes --

For links to the articles and material referenced in this week's episode, check out this week's page from our podcast dashboard. Phone systems for Ask Noah provided by Voxtelesys.

-- Stay In Touch --

Find all the resources for this show on the Ask Noah Dashboard. Need more help than a radio show can offer? Altispeed provides commercial IT services, and they're excited to offer you a great deal for listening to the Ask Noah Show. Call today and ask about the discount for listeners of the Ask Noah Show! Contact Noah: asknoah [at]

-- Twitter --

Noah - Kernellinux | Ask Noah Show | Altispeed Technologies | Jupiter Broadcasting
          Tom commented on Debbie's blog post My experience with Apricot kernels      Cache   Translate Page      

          Porting Hyperkernel to the ARM architecture      Cache   Translate Page      
This work describes the porting of Hyperkernel, an x86 kernel, to the ARMv8-A architecture. Hyperkernel was created to demonstrate various OS design decisions that are amenable to push-button verification. Hyperkernel simplifies reasoning about virtual memory by separating the kernel and user address spaces. In addition, Hyperkernel adopts an exokernel design to minimize code complexity, and thus its required proof burden. Both of Hyperkernel's design choices are accomplished through the use of x86 virtualization support. After developing an x86 prototype, advantageous design differences between x86 and ARM motivated us to port Hyperkernel to the ARMv8-A architecture. We explored these differences and benchmarked aspects of the new interface Hyperkernel provides on ARM to demonstrate that the ARM version of Hyperkernel should be explored further in the future. We also outline the ARMv8-A architecture and the various design challenges overcome to fit Hyperkernel within the ARM programming model.

          Comment on A look at PSVita upcoming games that will likely require FW 3.69+ – Is it really worth it to lose taiHEN/h-encore to play these games? by StepS      Cache   Translate Page      
Version spoofing doesn't let you run encrypted games from higher firmwares. Because f00d (the unhacked security layer above kernel) will refuse to decrypt them.
          Sony PlayStation Vita 3.65 / 3.67 / 3.68 h-encore Kernel And User Modifications      Cache   Translate Page      
Whitepaper discussing h-encore kernel and user modifications on the Sony PlayStation Vita versions 3.65, 3.67, and 3.68.
          Ubuntu: SchoolTool, Lubuntu Development Newsletter, and Patches      Cache   Translate Page      
  • How to install School tool on Ubuntu 18.04 LTS

    SchoolTool is a free and open source suite of administrative software for schools that can be used to create a simple turnkey student information system, including demographics, gradebook, attendance, calendaring and reporting for primary and secondary schools. You can easily build customized applications and configurations for individual schools or states using SchoolTool. SchoolTool is a web-based student information system specially designed for schools in the developing world, with support for localization, translation, automated deployment and updates via the Ubuntu repository.

  • Lubuntu Development Newsletter #11

    We have swapped out SMPlayer for VLC, Nomacs for LXImage-Qt, and the KDE 5 LibreOffice frontend instead of the older KDE 4 frontend. We are working on installer slideshow updates to reflect these changes.

    Walter Lapchynski is working on packaging Trojitá; that will be done soon.

    Lastly, we fixed a bug in the daily which did not properly set the GTK 3 theme when configured if no GTK theme had been configured before.

  • The First Beta of the /e/ OS to Be Released Soon, Canonical's Security Patch for Ubuntu 18.04 LTS, Parrot 4.2.2 Now Available, Open Jam 2018 Announced and Lightbend's Fast Data Platform Now on Kubernetes

    Canonical yesterday released a Linux kernel security patch for Ubuntu 18.04 LTS that addresses two recently discovered vulnerabilities.

read more

          AMD's Latest Linux and Free Software Work
  • AMD Sends Out Initial Open-Source Linux Graphics Support For "Picasso" APUs

    Adding to the exciting week for AMD open-source Linux graphics is that in addition to the long-awaited patch update for FreeSync/Adaptive-Sync/VRR, patches for the Linux kernel were sent out prepping the graphics upbringing for the unreleased "Picasso" APUs.

    Picasso APUs are rumored to be similar to Raven Ridge APUs and would be for the AM4 socket. Picasso might launch in Q4 but is intended as a 2019 platform for AM4 desktops, with a version for notebooks as well. Picasso is not expected to be a large step up from the current Raven Ridge parts.

  • AMD's Marek Olšák Is Dominating Mesa Open-Source GPU Driver Development This Year

    With Q3 coming to an end, here is a fresh look at the Mesa Git development trends for the year to date. Mesa's commit count is significantly lower than in previous years, but there is a new top contributor to Mesa.

    Mesa as of today is made up of 6,101 files comprising 2,492,887 lines of code. Yep, soon it will break 2.5 million lines. There have been 104,754 commits to Mesa from roughly 900 authors.

  • AMD Lands Mostly Fixes In Latest Batch Of AMDVLK/XGL/PAL Code Updates

    The AMD developers maintaining their "AMDVLK" Vulkan driver have pushed out their latest batch of code comprising this driver including the PAL abstraction layer, XGL Vulkan bits, and LLPC LLVM-based compiler pipeline.

read more

          LWN on Security: Updates, fs-verity, Spectre, Qubes OS/CopperheadOS
  • Security updates for Wednesday
  • Protecting files with fs-verity

    The developers of the Android system have, among their many goals, the wish to better protect Android devices against persistent compromise. It is bad if a device is taken over by an attacker; it's worse if it remains compromised even after a reboot. Numerous mechanisms for ensuring the integrity of installed system files have been proposed and implemented over the years. But it seems there is always room for one more; to fill that space, the fs-verity mechanism is being proposed as a way to protect individual files from malicious modification.

    The core idea behind fs-verity is the generation of a Merkle tree containing hashes of the blocks of a file to be protected. Whenever a page of that file is read from storage, the kernel ensures that the hash of the page in question matches the hash in the tree. Checking hashes this way has a number of advantages. Opening a file is fast, since the entire contents of the file need not be hashed at open time. If only a small portion of the file is read, the kernel never has to bother reading and checking the rest. It is also possible to catch modifications made to the file after it has been opened, which will not be caught if the hash is checked at open time.
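    The block-level check described above can be sketched in a few lines of Python. This is a toy model, not the kernel code: real fs-verity keeps a multi-level Merkle tree of which only the root hash is trusted, while here a flat list of per-block SHA-256 hashes stands in for the tree, and `BLOCK_SIZE` and the helper names are invented for illustration.

    ```python
    import hashlib

    BLOCK_SIZE = 4096  # fs-verity hashes fixed-size blocks; 4 KiB is a typical choice

    def build_block_hashes(data: bytes) -> list[bytes]:
        """Precompute one hash per block (the leaf level of the Merkle tree)."""
        return [hashlib.sha256(data[i:i + BLOCK_SIZE]).digest()
                for i in range(0, len(data), BLOCK_SIZE)]

    def verify_block(data: bytes, hashes: list[bytes], block_no: int) -> bool:
        """Check a single block as it is read, without hashing the rest of the file."""
        block = data[block_no * BLOCK_SIZE:(block_no + 1) * BLOCK_SIZE]
        return hashlib.sha256(block).digest() == hashes[block_no]
    ```

    Because each block is verified independently, opening the file costs nothing up front, and a modification made after the file was opened is still caught the next time the affected block is read.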

  • Strengthening user-space Spectre v2 protection

    The Spectre variant 2 vulnerability allows the speculative execution of incorrect (in an attacker-controllable way) indirect branch predictions, resulting in the ability to exfiltrate information via side channels. The kernel has been reasonably well protected against this variant since shortly after its disclosure in January. It is, however, possible for user-space processes to use Spectre v2 to attack each other; thus far, the mainline kernel has offered relatively little protection against such attacks. A recent proposal from Jiri Kosina may change that situation, but there are still some disagreements around the details.

    On relatively recent processors (or those with suitably patched microcode), the "indirect branch prediction barrier" (IBPB) operation can be used to flush the branch-prediction buffer, removing any poisoning that an attacker might have put there. Doing an IBPB whenever the kernel switches execution from one process to another would defeat most Spectre v2 attacks, but IBPB is seen as being expensive, so this does not happen. Instead, the kernel looks to see whether the incoming process has marked itself as being non-dumpable, which is typically only done by specialized processes that want to prevent secrets from showing up in core dumps. In such cases, the process is deemed to be worth protecting and the IBPB is performed.

    Kosina notes that only a "negligible minority" of the code running on Linux systems marks itself as non-dumpable, so user space on Linux systems is essentially unprotected against Spectre v2. The solution he proposes is to use IBPB more often. In particular, the new code checks whether the outgoing process would be able to call ptrace() on the incoming process. If so, the new process can keep no secrets from the old one in any case, so there is no point in executing an IBPB operation. In cases where ptrace() would not succeed, though, the IBPB will happen.
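    The proposed policy can be modeled in a few lines of Python; the function names and the UID-based access rule below are invented for illustration (the kernel's real check is the much richer ptrace_may_access(), called on the context-switch path):

    ```python
    def may_ptrace(tracer: dict, tracee: dict) -> bool:
        """Toy stand-in for the kernel's ptrace_may_access() check:
        root may trace anyone; otherwise only same-UID processes."""
        return tracer["uid"] == 0 or tracer["uid"] == tracee["uid"]

    def ibpb_needed(prev: dict, nxt: dict) -> bool:
        """Decide whether to issue the (expensive) IBPB flush on a switch
        from `prev` to `nxt`. If prev could ptrace() nxt anyway, nxt can
        keep no secrets from prev, so the flush would buy nothing."""
        if prev is nxt:          # no real switch, nothing to protect
            return False
        return not may_ptrace(prev, nxt)
    ```

    The point of the ordering is that the flush is skipped exactly when it is useless: a process that could attach to the incoming task with ptrace() can already read all of its secrets directly.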

  • Life behind the tinfoil curtain

    Security and convenience rarely go hand-in-hand, but if your job (or life) requires extraordinary care against potentially targeted attacks, the security side of that tradeoff may win out. If so, running a system like Qubes OS on your desktop or CopperheadOS on your phone might make sense, which is just what Konstantin Ryabitsev, Linux Foundation (LF) director of IT security, has done. He reported on the experience in a talk [YouTube video] entitled "Life Behind the Tinfoil Curtain" at the 2018 Linux Security Summit North America.

    He described himself as a "professional Russian hacker" from before it became popular, he said with a chuckle. He started running Linux on the desktop in 1998 (perhaps on Corel Linux, which he does not think particularly highly of) and has been a member of the LF staff since 2011. He has been running Qubes OS on his main workstation since August 2016 and CopperheadOS since September 2017. He stopped running CopperheadOS in June 2018 due to the upheaval at the company, but he hopes to go back to it at some point—"maybe".

read more

          Parrot Security 4.2.2 - Security GNU/Linux Distribution Designed with Cloud Pentesting and IoT Security in Mind

Updated kernel and core packages

Parrot 4.2 is powered by the latest Linux 4.18 debianized kernel with all the usual wireless patches.
A new version of the Debian-Installer now powers our netinstall images and the standard Parrot images.
Firmware packages were updated to add broader hardware support, including wireless devices and AMD Vega graphics.

AppArmor and Firejail profiles were adjusted to offer a good compromise of security and usability for most of the desktop and CLI applications and services.

Important desktop updates

Parrot 4.2 now provides the latest LibreOffice 6.1 release, Firefox 62, and many other important updates.

Desktop users will also find useful the inclusion of default .vimrc and .emacs config files with syntax highlight and line number columns.

Important tools updates

Armitage was finally updated, and the “missing RHOSTS” error was fixed.

We also imported the latest Metasploit 4.17.11, along with Wireshark 2.6, hashcat 4.2, edb-debugger 1.0, and many other updated tools.

New documentation portal

The new documentation portal can be visited here; feel free to contribute and expand the documentation by sending a pull request on

Download Parrot Security 4.2.2

          Alpine Linux 3.8.1 Released, a Lightweight Linux Distribution

Alpine Linux 3.8.1 has been released. Alpine Linux is a security-oriented, lightweight Linux distribution based on musl libc and busybox.

This is a bug-fix release in the v3.8 stable branch, based on Linux kernel 4.14.69. It includes an important security update for apk-tools that fixes a potential remote code execution.


  • py-sphinx_rtd_theme 0.4.0

  • php 7.2.8 和 5.6.37

  • postgresql 10.5

  • redis 4.0.11

  • ghostscript 9.24

  • docker 18.06.1

  • nodejs 8.11.4

  • apk-tools 2.10.1

  • ……



          OTP 21.0.9 Released, an Application Server Written in Erlang

OTP 21.0.9 has been released. OTP (Open Telecom Platform) is an application server written in Erlang: a set of Erlang libraries composed of the Erlang runtime system, a number of ready-to-use components mainly written in Erlang, and a set of design principles for Erlang programs.


  • compiler-7.2.4

  • erts-10.0.8


  • asn1-5.0.6

  • common_test-1.16

  • crypto-4.3.2

  • debugger-4.2.5

  • dialyzer-3.3

  • diameter-2.1.5

  • edoc-0.9.3

  • eldap-1.2.4

  • erl_docgen-0.8

  • erl_interface-3.10.3

  • et-1.6.2

  • eunit-2.3.6

  • ftp-1.0

  • hipe-3.18

  • inets-7.0.1

  • jinterface-1.9

  • kernel-6.0.1

  • megaco-3.18.3

  • mnesia-4.15.4

  • observer-2.8

  • odbc-2.12.1

  • os_mon-2.4.5

  • otp_mibs-1.2

  • parsetools-2.1.7

  • public_key-1.6.1

  • reltool-0.7.6

  • runtime_tools-1.13

  • sasl-3.2

  • snmp-5.2.11

  • ssh-4.7

  • ssl-9.0.1

  • stdlib-3.5.1

  • syntax_tools-2.1.5

  • tftp-1.0

  • tools-3.0

  • wx-1.8.4

  • xmerl-1.3.17


          Git 2.19.0-r1
[Name] Git
[Summary] Git is a popular version control system designed to handle very large projects with speed and efficiency; it is used mainly for various open source projects, most notably the Linux kernel.
[Description] Git is a distributed version control system focused on speed, efficiency, and real-world usability on large projects. Its highlights include:
Strong support for non-linear development.
Distributed development.
Efficient handling of large projects.
Cryptographic authentication of history.
Toolkit design.

          Timothy Bohrer one of five to be inducted into Packaging & Processing Hall of Fame
Timothy Bohrer, Hall of Fame Inductee
Next time you enjoy the convenience of a bag of popcorn kernels quickly popped in the microwave, you can thank Tim Bohrer, CPP, founder of Pac Advantage Consulting, LLC.

Among his many accomplishments in technical innovation and his contributions to the packaging industry, this PMMI Packaging & Processing Hall of Fame inductee led the team that developed and commercialized the first metallized film susceptor packages in the early ’80s. “The first few susceptors were simple in concept, but complicated in execution,” says Bohrer. “But over the years, we developed a very large patent portfolio of technologies and kept pushing the envelope.”

During his 45 years in packaging, Bohrer has been named inventor or co-inventor of 17 patents for microwave packaging technology, composite containers, barrier film, thermoforming, and other technologies. While leading the team that made a whole new market possible through the use of susceptor technology is certainly a highlight of his career, Bohrer can also list a number of others, made possible, he says, by supervisors and managers who gave him the latitude to “stretch.”

One highpoint, he shares, came when he was less than two years out of school, working at American Can. Asked to assess tubular water cooling technologies for blown film, Bohrer scouted out the best option for the company and laid the groundwork for a pilot line that eventually became a part of American Can and its successors’ process. “At a very early stage in my career, having a chance to lead that process and be trusted to go out and find something and make judgements was a very big deal to me.”

Packaging education, in fact, is near and dear to Bohrer’s heart. For 20 years, he was a Clemson University Packaging Science Advisory Board member, ramping up his participation after starting his consulting business in 2008 as a way to give back. “I really enjoyed my time working with the folks at Clemson,” he says. “I had a chance to meet with a lot of young people and give them direction and advice. I think it’s crucial for packaging companies to take packaging education seriously, even if all they do is hook up with a two-year junior college in their area that teaches the technologies and skills that are needed.”

Especially valuable are internships, he adds. After all, it was an internship at American Can during the summer before his senior year of college that led Bohrer down the packaging industry path. “That’s where I learned the interesting and challenging things that could be done in packaging, and I told myself, ‘This is the company I want to work for. This is the kind of stuff I want to do,’” he recalls.

After finishing his undergrad work in chemical engineering at Michigan Tech University, Bohrer got his Master of Science in Chemical Engineering from Purdue University, then returned triumphant to American Can.

Today, Bohrer says he has the luxury of only working on consulting jobs that meet three criteria: The work has to be intellectually challenging, the work must be meaningful, and the clients must be those he genuinely wants to work with. “That’s what I enjoy a great deal,” he says. “It engages me on an intellectual level, it’s satisfying in terms of doing something that makes a difference, and it satisfies me on an emotional level, with friendships and the enjoyment of working with good people.”

The Packaging & Processing Hall of Fame will welcome all five new members as its 45th class at PACK EXPO International 2018 (Oct. 14–17; McCormick Place, Chicago), according to Hall of Fame coordinator and show producer PMMI, The Association for Packaging and Processing Technologies. This year’s other inductees are Keith Pearson, Michael Okoroafor, Susan Selke and Chuck Yuska. Read short snippets about the other inductees by clicking here, or read their full profiles, to be posted individually in coming days, by clicking here.

Senior Editor

          TuxMachines: today's howtos

read more

          TuxMachines: Linux Foundation and Kernel Events, Developments
  • Top 10 Reasons to Join the Premier European Open Source Event of the Year [Ed: LF advertises this event where Microsoft is Diamond sponsor (highest level). LF is thoroughly compromised, controlled by Linux's opposition.]
  • AT&T Spark conference panel highlights open source road map and needs [Ed: Linux Foundation working for/with a surveillance company]

    The telecommunications industry has been around for 141 years, but the past five have been the most disruptive, according to the Linux Foundation's Arpit Joshipura.

    Joshipura, general manager, networking and orchestration, said on a panel during Monday's AT&T Spark conference in San Francisco that the next five years will be marked by deployment phases across open source communities and the industry as a whole.

    "Its (telecommunications) been disrupted in just the last five years and the speed of innovation has skyrocketed in just the last five years since open source came out," Joshipura said.

  • A Hitchhiker’s Guide to Deploying Hyperledger Fabric on Kubernetes

    Deploying a multi-component system like Hyperledger Fabric to production is challenging. Join us Wednesday, September 26, 2018 9:00 a.m. Pacific for an introductory webinar, presented by Alejandro (Sasha) Vicente Grabovetsky and Nicola Paoli of AID:Tech.

  • IDA: simplifying the complex task of allocating integers

    It is common for kernel code to generate unique integers for identifiers. When one plugs in a flash drive, it will show up as /dev/sdN; that N (a letter derived from a number) must be generated in the kernel, and it should not already be in use for another drive or unpleasant things will happen. One might think that generating such numbers would not be a difficult task, but that turns out not to be the case, especially in situations where many numbers must be tracked. The IDA (for "ID allocator", perhaps) API exists to handle this specialized task. In past kernels, it has managed to make the process of getting an unused number surprisingly complex; the 4.19 kernel has a new IDA API that simplifies things considerably.

    Why would the management of unique integer IDs be complex? It comes down to the usual problems of scalability and concurrency. The IDA code must be able to track potentially large numbers of identifiers in an efficient way; in particular, it must be able to find a free identifier within a given range quickly. In practice, that means using a radix tree (or, soon, an XArray) to track allocations. Managing such a data structure requires allocating memory, which may be difficult to do in the context where the ID is required. Concurrency must also be managed, in that two threads allocating or freeing IDs in the same space should not step on each other's toes.
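    The contract of the simplified 4.19 interface — hand out the lowest unused integer, and allow IDs to be returned for reuse — can be modeled in userspace Python. The set below stands in for the radix tree/XArray, the linear scan for its efficient free-slot search, and the class and method names are invented for illustration:

    ```python
    class ToyIda:
        """Minimal model of the kernel IDA contract: allocate the lowest
        unused non-negative integer, and allow IDs to be freed for reuse."""

        def __init__(self) -> None:
            self._used: set[int] = set()

        def alloc(self) -> int:
            # The real IDA finds a free slot via a tree walk, not a scan,
            # and handles memory allocation and locking internally.
            n = 0
            while n in self._used:
                n += 1
            self._used.add(n)
            return n

        def free(self, n: int) -> None:
            self._used.discard(n)
    ```

    Freeing an ID makes it available again, so the next allocation fills the lowest gap first — the property that keeps device names like /dev/sdN densely packed.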

read more

          FreeBSD Security Advisory - FreeBSD-SA-18:12.elf
FreeBSD Security Advisory - Insufficient validation was performed in the ELF header parser, and malformed or otherwise invalid ELF binaries were not rejected as they should be. Execution of a malicious ELF binary may result in a kernel crash or may disclose kernel memory.
          Linux dmesg Arbitrary Kernel Read
Linux suffers from an arbitrary kernel read into dmesg via a missing address check in the segfault handler.
          Palm Kernel Oil And Coconut Oil Based Natural Fatty Acids Market Key Players by 2024: Wilmar International, Musim Mas Holdings, KLK Oleo, P&G, Kao Corporation, Pacific Oleochemicals, Emery Oleochemicals, OLEON, United Coconut Chemicals, Chemical Associate
Palm kernel oil and coconut oil based natural fatty acids market is poised to surpass a revenue of USD 8 billion with a substantial annual growth rate of 4.5% over the period of 2016-2024. Global surfactant demand for detergents used in

          Episode 58: #58: You Will Know Dave’s Vacation By the Trail of Destruction

This week, Dave and Gunnar talk about Barthelona, Gaudi, Toledo, Detroit, the Stasi, and why cloud providers can’t have nice things.


Cutting Room Floor

We Give Thanks

          Episode 50: #50: Trenton on Notice

This week, Dave and Gunnar talk about: procurement disasters past, present, and future; cloud arbitrage; Banana Slugs and storage; the Large Hadron Collider; and homesteading on the cloud.


Cutting Room Floor

We Give Thanks

          Red Hat Summit 2014: Linux Kernel Internals with Linda Wang

Dave talked with Linda Wang, Senior Software Engineering Manager with Red Hat. Her team handles memory management, scheduling, kdump, and all that gnarly low-level stuff. Learn how kpatch was created because of a missed conference call, and get some tips from Linda on how to participate in kernel development.


          Episode 34: #34: Velociraptor

This week on Dave and Gunnar: Oracle plays with science, Amazon plays with the US Postal Service, and everyone plays with tracking you like a criminal.

Subscribe via RSS or iTunes.


Cutting Room Floor

We Give Thanks

          Episode 30: #30: Sequestration and Subscriptions

This week, Dave and Gunnar talk about Vulcan death grips, death from above, and the death of the open source business model.

Subscribe via RSS or iTunes.

A Hill Staffer checks his phone as Capitol Hill police take aim on the grounds of the Capitol.

Cutting Room Floor

We Give Thanks

          Episode 17: #17: An Internet of *My* Things

This week, Dave and Gunnar talk about: Evolution of evolution, skeumorphism (again), Onion Pi, Brick Pi, Red Hat Summit 2013, an interview with Nirmal Mehta and some lessons learned from Google and NSA.

Subscribe via RSS or iTunes.

Nirmal big and small.

So this is an enormous episode, the longest we’ve done yet. If you just want to hear from Nirmal, jump to 1:00:00. Yes, an hour in. This episode is epic.

Cutting Room Floor

We Give Thanks

  • Glyn Moody making our day by letting us know about CalDAV’s and CardDAV’s return to Google
  • Gunnar’s sister Wendy, for her Guinea Pig enthusiasm.
          Dagaz: Out of the Fog
All this is the trickery of Queen Mab:
she plaits the manes of horses in the stables
and mats their hair into elflocks…

William Shakespeare

This was a long release, but a lot got done. A session manager appeared that lets you roll back moves made by mistake. Sound effects were added here and there. I also came up with a fun way to pack several alternative starting setups into a single game. And most importantly, I finally got around to games with incomplete information.
Read more →
          [Mac Forum] Questions about a tower Mac setup for 120+ track orchestral production, and Logic audio settings for composing and mixing

Hello, Cuonet members,

I have questions about a tower Mac setup for producing orchestral pieces with more than 120 tracks, and about Logic audio settings for composing and mixing.

The base-model cylinder Mac Pro I was using had so many heat and maintenance problems that I'm planning to move to a stable, expandable tower Mac.

The spec I'm considering: Xeon W 12-core 3.4 GHz, 128 GB RAM, 2 TB SSD, AMD ATI HD7950, 4x WD Black 4 TB, UAD PCIe OCTO, and a USB 3.0 card.

For the audio interface, I've been running a UAD Apollo 8 word-clocked to a Symphony, using the Apollo as the controller and the Symphony's AD/DA.

Since the tower Mac has no Thunderbolt, I plan to replace the Apollo 8 with an RME UFX+, which is stable and fast, connected to the Symphony.

For about the past three years, whether due to performance limits, my own bad setup, or heat, Kernel_Task has almost always eaten more than half of my CPU and RAM, and since nearly all my tracks ran through Kontakt (Spitfire, 8Dio, etc.), RAM consumption was so heavy that working was difficult.

What I want to know is whether, with the high-end tower Mac and peripherals above, I can record from the keyboard with near-real-time (fast) response while composing MIDI, and at mix time run over 120 tracks without dropouts, crashes, or severe heat.

For stability, I always kept the buffer at medium settings: 128 samples while tracking, raised to 1080 (I don't remember the exact number) when mixing and mastering.

I also used Fan Control, a mouse speed/acceleration controller app, and a CPU/memory analyzer, and suspecting they were degrading performance (especially Fan Control), I deleted them all, but nothing much changed. So I concluded it's a hardware problem and decided to change systems.

If you know of any tips, settings, or gear I'm not aware of, please don't hesitate to share.

With a big project a month away, I'm under a lot of stress. I'd truly appreciate even a small helping hand.

Thanks in advance to everyone who answers, whatever the answer may be.

Have a good evening.

          Comments on [PS4 Scene] PS4_db_rebuilder v0.1 by Jhonny82 Released
It's a kernel panic, and it's part of the exploit's success rate, which isn't 100%. Normal.
          BlackBerry to spark ultra-secure hyperconnectivity with new EoT platform
BlackBerry has unveiled BlackBerry Spark, the only Enterprise of Things (EoT) platform designed and built for ultra-secure hyperconnectivity from the kernel to the edge. Defined as the interconnectedness of people, organizations, and machines, hyperconnectivity is set to revolutionize the way ... Reported by DNA 40 minutes ago.
          Software Engineer - VMware - Palo Alto, CA
1 year of experience with Intel and AMD x86-based processor architecture. 1 year of experience with OS kernel internals, including memory management, resource...
From VMware - Wed, 25 Jul 2018 00:20:47 GMT - View all Palo Alto, CA jobs
          KeyScrambler

KeyScrambler encrypts your keystrokes deep in the kernel, as they travel from your keyboard to the destination app, so whatever keyloggers may be awaiting in the operating system will get only scrambled, indecipherable, useless data to record. This preventive approach enables KeyScrambler to stay one step ahead of the bad guys instead of running after them. It protects your data/identity even on security compromised computers, defeats both known and unknown keyloggers, and effectively closes the gap in traditional anti-virus, anti-malware programs, whose detect-and-remove method proves ineffective in dealing with new malware attacks.

Thanks to Astron for the update.


          On flexible Green function methods for atomistic/continuum coupling
Atomistic/continuum (A/C) coupling schemes have been developed during the past twenty years to overcome the vast computational cost of fully atomistic models, but have not yet reached full maturity to address many problems of practical interest. This work is therefore devoted to the development and analysis of flexible Green function methods for A/C coupling. Thereby, the Green function of the harmonic crystal is computed a priori and subsequently employed during the simulation of a fully nonlinear atomistic problem to update its boundary conditions on-the-fly, based on the motion of the atoms and without the need of an explicit numerical discretization of the bulk material. The first part is devoted to the construction of a discrete boundary element method (DBEM) which bears several advantages over its continuous analog, i.a. nonsingular Green kernels and direct application to nonlocal elasticity. As is well-known from integral problems, the DBEM leads to dense system matrices which become quickly unfeasible due to their quadratic complexity. To overcome this computational burden, an implicit approximate representation using hierarchical matrices is proposed which have proven their efficiency in the context of boundary integral equations while preserving overall accuracy. In order to solve the coupled atomistic/DBEM problem, several staggered and monolithic solution procedures are assessed. An improvement of the overall accuracy by several orders of magnitude is found in comparison with naive clamped boundary conditions. To further account for plasticity in the continuum domain the coupled atomistic/discrete dislocations (CADD) method is examined, including the treatment of hybrid dislocation lines that span between the two domains. In particular, a detailed derivation of a quasi-static problem formulation is covered and a general algorithm to simulate the motion of the hybrid dislocations along A/C interfaces is presented. 
In addition, to avoid solving the complementary elasticity problem, a simplified solution procedure, which updates the boundary conditions based on the Green function of the entire dislocation network for obtaining accurate stress and displacement fields, is introduced and validated. The test problem consists of the bowout of a single dislocation in a semi-periodic box under an applied shear stress, and excellent results are obtained in comparison to fully-atomistic solutions of the same problem.
          [PS Vita Scene] VitaShell 1.96 Released

Developer TheFlow, while waiting to be able to study the PlayStation 4 kernel in depth, has released a new update of VitaShell, the file manager for PlayStation Vita and PlayStation TV. VitaShell is the best alternative to the PlayStation Vita and PlayStation TV LiveArea; it integrates a very useful file manager, a program for installing VPK packages, a...

The article [PS Vita Scene] VitaShell 1.96 Released originally appeared on BiteYourConsole.

          Red Hat dev questions why older Linux kernels are patched quietly

itwire: A Linux developer who works for the biggest open source vendor, Red Hat, has questioned why security holes in older Linux kernels - those listed as having long-term support - are being quietly patched.

          Xinyang Cyril Food Technology Co., Ltd
Our company has been in acorn processing for more than 10 years. We focus on producing and exporting the highest quality Acorn Starch, Acorn Kernel, Water Chestnut Starch and Green Manure Seeds in the world. We supply some of the biggest food companies, and these customers expect the highest quality conforming to global standards. Our company has adopted advanced Korean technology, equipment and quality testing methods and established a complete quality management system. At present, we have established close business relationships with Korea, Japan and other countries. Collectively, we form a team that delivers a product quickly and efficiently with quality and excellence. For more information, please call or email us anytime.
          Jinzhou Qiaopai Group Co., Ltd.
Our company is a professional manufacturer of agro-processing machinery, such as sunflower seed dehulling machines (i.e. sunflower seed dehuller, sunflower seed huller, sunflower seed sheller, sunflower seed hulling machine, sunflower seed shelling machine), pumpkin seed dehullers, buckwheat cleaning, sizing, dehulling and separating machines, oat hulling machines, in-shell almond processing lines, peeling and separating machines for mung beans (mung bean peelers), bean cleaning machines, cleaning and separating machines for all kinds of oilseeds, kernels and grains, grain roasters, etc., and is also the largest supplier of sunflower kernels, buckwheat kernels, roasted buckwheat kernels, pumpkin seeds and pumpkin kernels in China. Our products list as follows: 1. Sunflower seed dehulling machine or sunflower seed dehuller 2. Pumpkin seed hulling machine, pumpkin seed dehulling machine or pumpkin seed dehuller 3. Peanut cleaning and dehulling machine 4. Watermelon seed dehulling machine 5. Mung bean decorticating machine or mung bean peeling machine 6. Almond dehuller 7. Buckwheat hulling machine, buckwheat huller or buckwheat dehuller 8. Oat huller 9. Hempseed dehuller 10. Cleaning and grading machine for buckwheat 11. Semi-automatic packing machine 12. Cleaning machine for beans 13. Drier 14. Roaster for grains 15. Cleaner for all kinds of oilseeds, kernels, grains and seeds 16. Tapioca starch and starch 17. Hulled buckwheat kernels and roasted buckwheat kernels Main Products: Sunflower seed dehulling machine or dehuller (i.e. huller, sheller), pumpkin seed dehulling machine or dehuller, buckwheat dehulling machine, almond dehulling machine or dehuller, sunflower seed kernels, peanut dehuller, cleaner for all kinds of oilseeds, kernels, grains and seeds, semi-automatic selecting chair, roaster, packing machine, etc.
Qiaopai Group will always follow the quality and service policy: adhere to quality criteria, stick to scientific and technical innovation, provide excellent service, pursue the customer''s satisfaction.
          Pink Sugar Type Whipped Body Butter, Goat Milk, Shea and Cocoa Butter With Vitamin C, Handmade by GaGirlNaturals

17.75 USD

Pink Sugar Type Whipped Body Butter, Goat Milk, Shea, Cocoa Butters With Vitamin C, Handmade, 8 Oz.

Compare to Pink Sugar Type.

Pink Sugar Type fragrance is a heavenly sweet scent that blends fruity essences of orange, bergamot, raspberry, and red berries with fig leaf and a note of spun sugar, creating a fun mixture that’s sure to make people smile. It is reminiscent of cotton candy and carnival rides of bygone days.

Want a handmade body butter that is skin softening and moisturizing and also has many wonderful benefits for your skin? Then, you have found a body butter for you! My handmade goat milk based whipped body butter is a luxurious treat for your skin. Made with goat milk, vitamin C, shea and cocoa butters, coconut, apricot kernel, avocado, olive, grape seed, pomegranate seed, argan, rose hip seed, and vitamin E oils to nourish your skin and provide many skin loving benefits.

Goat milk contains alpha-hydroxy acids that help exfoliate dry, dead skin cells; probiotics that help protect the skin from ultraviolet light; and high amounts of protein, fat, iron, and vitamins A, B6, B12, C, D, and E, among many others. These vitamins and minerals help slow down aging, help the skin rebuild, add elasticity, and help the skin retain moisture, and goat milk itself is readily absorbed into the skin and very moisturizing. Vitamin C helps build collagen, protects against ultraviolet rays, and contains antioxidants. Rose hip seed oil contains retinoic acid, a natural form of vitamin A. Pomegranate seed oil is powerfully antioxidant and anti-inflammatory, and it is known to significantly boost epidermal cell regeneration. Avocado, apricot kernel, olive, grape seed, and coconut oils are readily absorbed into the skin and very moisturizing.

My handmade products are made to order fresh for you.

Phthalate free
Propylene glycol free
Paraben free
Gluten free
Cruelty free

Ingredients: Distilled water, sunflower oil, soya oil, vegetable glycerin, potassium sorbate, meadow foam oil, jojoba oil, goat's milk, aloe vera, vitamin c, shea butter, cocoa butter, coconut oil, pomegranate seed oil, olive oil, apricot kernel oil, avocado oil, grape seed oil, argan oil, rose hip seed oil, vitamin E oil, stearic acid, cetyl alcohol, emulsifying wax, palmitic acid

** Please note that cosmetic products like soaps and lotions can begin to melt if left in high temperatures. Please note your tracking information and try to be available to receive your package promptly.**

My Credentials:

I have a Master Cosmetology License and a Certificate in Natural Health and Healing.

[*Type*] - Name trademarks and copyrights are properties of their respective manufacturers and/or designers. These versions are NOT to be confused with the originals and GaGirlNaturals has no affiliation with the manufacturers/designers. This description is to give the customer an idea of scent character, not to mislead, confuse the customer or infringe on the manufacturers/designer's name and valuable trademark.

**Customer Reviews**

Smells just wonderful. Quick shipping also. Thanks!

September Patch Tuesday: Windows Fixes ALPC Elevation of Privilege, Remote Code ...

September’s Patch Tuesday provides a security patch for CVE-2018-8440 , an elevation of privilege vulnerability that occurs when Windows incorrectly handles calls to the Advanced Local Procedure Call (ALPC) interface. This bug allows threat actors to run code with administrative privileges, install programs, or even create new accounts with full user rights. Proof-of-concept source code for this bug was publicly disclosed on August 27 via Twitter, and the bug has been seen actively used in malicious campaigns as early as September 5.

This month’s Patch Tuesday includes 61 CVEs for Windows, eleven of which came through Trend Micro’s Zero Day Initiative . Of the listed vulnerabilities, 17 were rated Critical, 43 Important, and one Moderate.

CVE-2018-8475 , a critical Windows remote code execution vulnerability, was also patched this month. This bug allows threat actors to execute code simply by getting someone to view an image containing malicious code. Because the bug is easily exploitable , it will likely be exploited in the wild soon.

This month’s Patch Tuesday also addresses two Adobe updates encompassing 10 CVEs. The first update fixes an information disclosure vulnerability for Windows Flash Player , while the second addresses several code execution and information disclosure bugs in ColdFusion .

Trend Micro Deep Security and Vulnerability Protection protect user systems from any threats that may target the vulnerabilities addressed in this month’s round of updates via the following DPI rules:

1009270-Microsoft Windows Task Scheduler ALPC Privilege Escalation Vulnerability (CVE-2018-8440)
1009276-Microsoft Edge Chakra Scripting Engine Memory Corruption Vulnerability (CVE-2018-8367)
1009277-Microsoft Edge Chakra Scripting Engine Memory Corruption Vulnerability (CVE-2018-8391)
1009290-Microsoft Windows Multiple Security Vulnerabilities (Sep-2018)
1009279-Microsoft Windows MSXML Remote Code Execution Vulnerability (CVE-2018-8420)
1009280-Microsoft Windows Kernel Information Disclosure Vulnerability (CVE-2018-8442)
1009281-Microsoft Internet Explorer Memory Corruption Vulnerability (CVE-2018-8447)
1009283-Microsoft Edge Scripting Engine Memory Corruption Vulnerability (CVE-2018-8456)
1009284-Microsoft Edge Scripting Engine Memory Corruption Vulnerability (CVE-2018-8459)
1009285-Microsoft Internet Explorer Memory Corruption Vulnerability (CVE-2018-8461)
1009286-Microsoft Edge PDF Remote Code Execution Vulnerability (CVE-2018-8464)
1009287-Microsoft Edge Chakra Scripting Engine Memory Corruption Vulnerability (CVE-2018-8466)
1009288-Microsoft Edge Chakra Scripting Engine Memory Corruption Vulnerability (CVE-2018-8467)
1009289-Microsoft Internet Explorer Security Feature Bypass Vulnerability (CVE-2018-8470)
1009293-Microsoft Windows Remote Code Execution Vulnerability (CVE-2018-8475)

Trend Micro TippingPoint customers are protected from threats that may exploit this month’s list of vulnerabilities via these MainlineDV filters:

32922: HTTP: Microsoft Edge Chakra Memory Corruption Vulnerability
32923: HTTP: Microsoft Edge Scripting Engine Memory Corruption Vulnerability
32924: HTTP: Microsoft Internet Explorer Use-After-Free Vulnerability
32903: HTTP: Microsoft Windows ALPC Privilege Escalation Vulnerability
32936: HTTP: Microsoft NT Kernel driver API Information Disclosure Vulnerability
32236: HTTP: Microsoft Internet Explorer insertRow Memory Corruption Vulnerability
32937: HTTP: Microsoft Edge defineProperty Type Confusion Vulnerability
32929: HTTP: Internet Explorer onresize Memory Corruption Vulnerability
32925: HTTP: Microsoft Edge PDF Parser Memory Corruption Vulnerability
32927: HTTP: Microsoft Edge Chakra Type Confusion Vulnerability
32928: HTTP: Microsoft Edge Chakra Type Confusion Vulnerability
33055: HTTP: Microsoft Windows TIFF Parsing Buffer Overflow Vulnerability
Kernel exploit discovered in macOS Webroot SecureAnywhere antivirus software
The severe memory corruption flaw permitted attackers to execute malware at the kernel level.
IoT Time: M2M/Internet of Things weekly digest
*Mobile TeleSystems (MTS)* has claimed a Russian first with the launch of services over its NB-IoT network in 20 cities across the Federation, including Moscow, St. Petersburg, Nizhny Novgorod, Novosibirsk, Kazan and Vladivostok. It plans to offer NB-IoT coverage in all large cities nationwide by the end of 2018. The NB-IoT infrastructure and service rollout is supported by technology solutions from *Huawei*, *Nokia*, *Ericsson*, *Samsung* and *Cisco*, with MTS targeting sectors such as logistics/transport, energy/utilities, mining, manufacturing, retail, health, smart cities and smart housing. The mobile operator highlighted that the NB-IoT network will significantly reduce the costs of implementing IoT/M2M solutions due to advantages over existing M2M standards, including increased network capacity, high radio-sensitivity, long service/battery life and low IoT module cost.

In another pan-Russia announcement, fixed network operator *ER-Telecom* disclosed that its LoRaWAN-based industrial IoT network has expanded services to 52 cities, stretching from the Pacific port of Vladivostok to the European exclave of Kaliningrad. ER-Telecom, operating under the brands and Business, has partnered with platform provider *Actility* for the rollout, targeting 60 cities by the end of this year. IoT solutions on the LoRa network are aimed at the energy sector as well as smart city applications, housing/communal services, other urban services and transport companies among others, offering tailored solutions for ER's business clients.

In the US, *AT&T* is narrowing down the launch window for its upcoming NB-IoT network, which will augment its existing low power, wide area (LPWA) IoT network based on LTE-M technology. AT&T now aims to launch NB-IoT connectivity services in the second quarter of 2019, Mobile World Live reports. Chris Penrose, president of IoT Solutions at AT&T, noted that cellular LPWA IoT services are finally starting to live up to the hype surrounding their early rollouts, saying that 'use cases are taking off,' and adding: 'It's about how we now truly scale this industry.'

*Vodafone Group* is doubling the number of LTE cell sites supporting the NB-IoT standard. Its existing NB-IoT networks in Germany, Italy, the Netherlands, the Czech Republic, Greece, Ireland, Spain, Turkey, South Africa and Australia will be densified, while new networks will be commercially launched in the UK, Romania and Hungary, TechRadar reports. Vodafone, which claims around 74 million M2M/IoT connections worldwide, says there is strong enterprise demand for NB-IoT-powered applications, with its IoT director Stefano Gastaut stating: 'NB-IoT gives businesses access to 5G capabilities a year before we expect large-scale consumer availability and I believe this will be a catalyst in the widespread use of IoT by enterprises.'

Indian IoT specialist *Unlimit*, part of the *Reliance Group*, has teamed up with state-owned telco *Bharat Sanchar Nigam Limited (BSNL)* to offer IoT services to enterprise customers across India in sectors including automobile, digital manufacturing, logistics, transportation, public sector enterprises and agriculture. Unlimit will leverage BSNL's pan-India cellular network to provide solutions including IoT device management, managed connectivity, application software and advanced analytics. Jurgen Hase, CEO at Unlimit, said: 'With the addition of BSNL's connectivity, we are further expanding our services and capabilities to help scale essential IoT projects in India and contribute significantly in the digitisation of the rural society.' Anupam Shrivastava, BSNL chairman, added: 'By combining our pan-India coverage, last mile network access, and bandwidth with Unlimit's range of services, we will help enterprises accelerate the pace of their new innovations and fast-track the digital transformation process.'

*BlackBerry* has unveiled 'BlackBerry Spark', described as the only Enterprise of Things (EoT) platform designed for ultra-secure hyperconnectivity from the kernel to the edge. BlackBerry Spark is aimed at manufacturers of complex 'things' such as autonomous vehicles and industrial equipment with the highest levels of security and safety-certification, as well as consumer-friendly interfaces to complex processes and AI, such as voice-activated speakers with built-in privacy protection. The platform is also designed for enterprises to leverage AI and manage smart 'things' regardless of operating system (Android, iOS, Linux, QNX, Windows), as well as to snap in existing platform services such as Android Things, AWS, Azure and Watson. BlackBerry also claims the Spark platform will make 'military-grade' security easy and intuitive for end users.

In another launch with eyes firmly on security, US operator *Sprint*, in partnership with its Japanese parent *SoftBank Group*'s member companies *Packet* and *Arm*, has unveiled the 'Curiosity IoT' platform, enabling enterprises to manage IoT devices and connectivity over the air across multiple SIM profiles. Sprint says that via Curiosity IoT 'intelligence from device data will be generated instantly through the dedicated, distributed and virtualized core, built together with the new operating system. And the ultimate level of security will be provided from the chip to the cloud.'

Lastly, *Cubic Telecom* has launched its Global Connectivity Management Platform offered 'as-a-Service' to IoT device manufacturers worldwide, with the first devices hosted on the platform already being managed in Europe, the USA and South Korea. Cubic says its unique platform solution brings standards-based remote SIM provisioning and cloud-native advanced connectivity management functions such as zero-touch device registration and connectivity activation across different mobile networks, regions and regulatory conditions. Cubic underlines that its IoT platform has 'entirely new integration capabilities, making it easy for device manufacturers from an array of different industries to deliver a truly global connectivity solution to end users through one global SIM and one platform … across all regions, with complete freedom of vendor choice.'

We welcome your feedback about *IoT Time*. If you have any questions, suggestions or corrections, please email **.
Re: [PATCH] selinux: Add __GFP_NOWARN to allocation at str_read()
Dmitry Vyukov writes: (Summary) On Thu, Sep 13, 2018 at 2:55 PM, peter enderborg
<> wrote:
And some of the calls are GFP_ATOMIC.
Then another option is to introduce a reasonable application-specific limit and not rely on kmalloc-anything at all. Another advantage is that what works on one version of the kernel will continue to work on another version. Today it's possible that a policy works on one kernel with a 4MB kmalloc limit, but breaks on another with a 2MB limit. Ideally the exact value of KMALLOC_MAX_SIZE does not affect anything in user-space.

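The bounded-length idea from the thread can be sketched in plain userspace C. This is my own illustration, not the actual SELinux code: the limit value and the function name are hypothetical, and the kernel version would use kmalloc(..., GFP_KERNEL | __GFP_NOWARN) rather than malloc.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical cap, standing in for the "reasonable application-specific
 * limit" suggested in the thread; the real policy-loading code would pick
 * its own bound instead of relying on kmalloc's maximum size. */
#define STR_MAX_LEN (1024 * 1024)

/* Sketch of a str_read()-style helper: validate the untrusted length
 * before allocating, so oversized input fails cleanly instead of
 * attempting a huge allocation (and, in the kernel, a WARN splat). */
char *str_read_bounded(const uint8_t *buf, uint32_t len)
{
    char *str;

    if (len == 0 || len > STR_MAX_LEN)
        return NULL;        /* reject instead of allocating */

    str = malloc(len + 1);
    if (!str)
        return NULL;

    memcpy(str, buf, len);
    str[len] = '\0';
    return str;
}
```

With an explicit cap like this, whether a policy loads no longer depends on the kernel's KMALLOC_MAX_SIZE, which is the portability point made above.
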
RE: [PATCH v2] efi: take size of partition entry from GPT header
David Laight writes: (Summary) From: Karel Zak
I suspect you also need a sanity check that the value isn't too small or stupidly large.
In principle, slightly short lengths presumably imply that the disk was formatted with an older standard, so the last fields should be ignored.
There may not be any such disks, at least until the on-disk structure is extended and the kernel structure updated to match.

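As a sketch of the kind of sanity check being suggested: the power-of-two rule follows the UEFI requirement that SizeOfPartitionEntry be 128 x 2^n bytes, while the 4096-byte upper bound is my own arbitrary "not stupidly large" cap, not something taken from the spec or the patch.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sanity check for GPT's SizeOfPartitionEntry field.
 * UEFI mandates 128 * 2^n bytes, i.e. a power of two >= 128; the
 * 4096-byte ceiling is an arbitrary guard against absurd values. */
bool gpt_entry_size_valid(uint32_t sz)
{
    if (sz < 128 || sz > 4096)
        return false;
    /* power-of-two test: exactly one bit set */
    return (sz & (sz - 1)) == 0;
}
```

A slightly short but otherwise plausible value would be rejected here; per the discussion above, a kernel might instead choose to accept it and ignore the trailing fields.
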
[PATCH 19/48] perf tools: Add thread::exited flag
Jiri Olsa writes: (Summary) Adding thread::exited to indicate the thread has exited, and keeping the thread::dead flag to indicate the thread is on the dead list.
Link: Signed-off-by: Jiri Olsa <>
--- tools/perf/util/machine.c | 5 +++-- 2 files changed, 6 insertions(+), 2 deletions(-) diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c index 999f200f24e7..5ae2baba27ca 100644 --- a/tools/perf/util/machine.c +++ b/tools/perf/util/machine.c @@ -1828,6 +1828,9 @@ static void __machine__remove_thread(struct machine *machine, struct thread *th, rb_erase_init(&th->rb_node, &threads->entries);
[PATCH 27/48] perf tools: Use map_groups__find_addr_by_time()
Jiri Olsa writes: (Summary) + if (al->map == NULL) { + /* + * If this is outside of all known maps, and is a negative + * address, try to look it up in the kernel dso, as it might be + * a vsyscall or vdso (which executes in user-mode). + } + } else { + /* + * Kernel maps might be changed when loading symbols so loading + * must be done prior to using kernel maps.
[PATCH 31/48] tools lib fd array: Introduce fdarray__add_clone fun ...
Jiri Olsa writes: (Summary) Adding fdarray__add_clone to be able to copy/clone a specific entry from the fdarray struct.
It will be useful when separating event maps for specific threads.
Link: Signed-off-by: Jiri Olsa <>
--- tools/lib/api/fd/array.c | +} + int fdarray__filter(struct fdarray *fda, short revents, void (*entry_destructor)(struct fdarray *fda, int fd, void *arg), void *arg) diff --git a/tools/lib/api/fd/array.h b/tools/lib/api/fd/array.h index b39557d1a88f..06e89d099b1e 100644 --- a/tools/lib/api/fd/array.h +++ b/tools/lib/api/fd/array.h @@ -34,6 +34,7 @@ struct fdarray *fdarray__new(int nr_alloc, int nr_autogrow);
[PATCH 36/48] perf tools: Add perf_mmap__read_tail function
Jiri Olsa writes: (Summary) It will be used in following patches.
Link: Signed-off-by: Jiri Olsa <>
--- tools/perf/util/mmap.h | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/tools/perf/util/mmap.h b/tools/perf/util/mmap.h index bad05b12b9df..eb39d3f85b93 100644 --- a/tools/perf/util/mmap.h +++ b/tools/perf/util/mmap.h @@ -79,6 +79,13 @@ static inline u64 perf_mmap__read_head(struct perf_mmap *mm) return head;
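
The head/tail pair that perf_mmap__read_tail deals with follows the usual single-producer/single-consumer ring-buffer protocol: the producer advances the head, and the consumer, after consuming data, publishes a new tail so the slots can be reused. Below is a minimal userspace model of that protocol; it is my own sketch, not the perf code, which additionally needs memory barriers and careful accesses when producer and consumer run concurrently.

```c
#include <stdint.h>

#define RING_SIZE 8   /* power of two, so masking wraps the index */

struct ring {
    uint64_t head;           /* written by the producer */
    uint64_t tail;           /* written by the consumer (what a read_tail
                              * style helper would fetch) */
    uint8_t  data[RING_SIZE];
};

/* Producer side: append one byte if there is room. */
int ring_write(struct ring *r, uint8_t b)
{
    if (r->head - r->tail == RING_SIZE)
        return -1;                          /* full */
    r->data[r->head & (RING_SIZE - 1)] = b;
    r->head++;                              /* real code: barrier, then publish */
    return 0;
}

/* Consumer side: read one byte if available, then publish the new tail
 * so the producer may reuse the slot. */
int ring_read(struct ring *r, uint8_t *b)
{
    if (r->tail == r->head)
        return -1;                          /* empty */
    *b = r->data[r->tail & (RING_SIZE - 1)];
    r->tail++;                              /* real code: barrier, then publish */
    return 0;
}
```
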
[PATCH 46/48] perf record: Add maps to --thread-stats output
Jiri Olsa writes: (Summary) Display the free size of each thread's memory maps as part of the --thread-stats output.
$ perf --debug threads=2 record ...
        pid  write  poll  skip  maps (size 20K)
1s     8914   136B     1     0  19K 19K 19K 19K
2s     8914   512K    43    79  19K 19K 17K 19K
3s     8914     3M   214   385  17K 16K 16K 17K
4s     8914     3M   121   291  17K 17K 18K 18K
...
Link: Signed-off-by: Jiri Olsa <>
--- tools/perf/builtin-record.c | current)) { - fprintf(stderr, "%6s %6s %10s %10s %10s\n", " ", "pid", "write", "poll", "skip");
[PATCH 22/48] perf tools: Introduce thread__find_symbol_by_time() ...
Jiri Olsa writes: (Summary) +} + +struct map *thread__find_map_by_time(struct thread *thread, u8 cpumode, + u64 addr, struct addr_location *al, + u64 timestamp) +{ + struct map_groups *mg; } +struct symbol *thread__find_symbol_by_time(struct thread *thread, u8 cpumode, + u64 addr, struct addr_location *al, + u64 timestamp) +{ + if (perf_has_index) + thread__find_map_by_time(thread, cpumode, addr, al, timestamp); + const u8 cpumodes[] = { + PERF_RECORD_MISC_USER, + PERF_RECORD_MISC_KERNEL, + PERF_RECORD_MISC_GUEST_USER, + PERF_RECORD_MISC_GUEST_KERNEL + };
Re: [char-misc v4.4.y 2/2] mei: bus: type promotion bug in mei_nfc ...
Greg Kroah-Hartman writes: On Tue, Sep 04, 2018 at 01:43:04AM +0300, Tomas Winkler wrote: Cc: <> # 4.4
I also need a version of this patch for 4.18.y, 4.14.y, and 4.9.y before I will consider adding it to 4.4.y, as we do not want anyone to ever get a regression moving to a new kernel.

greg k-h

[PATCH] printk: Do not miss new messages when replaying the log
Petr Mladek writes: (Summary) The variable "exclusive_console" is used to replay all existing messages on a newly registered console. 13 +++++++++---- 1 file changed, 9 insertions(+), 4 deletions(-) diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c index e30e5023511b..54dfc441a692 100644 --- a/kernel/printk/printk.c +++ b/kernel/printk/printk.c @@ -421,6 +421,7 @@ static u32 log_next_idx; MSG_FORMAT_SYSLOG, text + len, @@ -2415,10 +2423,6 @@ void console_unlock(void) console_locked = 0; - /* Release the exclusive_console once it is used */ - if (unlikely(exclusive_console)) - exclusive_console = NULL;
Re: 4.19-rc3: bindeb-pkg target is failing on ppc64
Ben Hutchings writes: (Summary) On Tue, 2018-09-11 at 15:58 +0100, Rui Salvaterra wrote: anything else in order to help debug this. It sounds like you don't actually have a ppc64 installation, but a powerpc installation with a 64-bit kernel. In that case you will now need to add ppc64 as a secondary architecture by running:

dpkg --add-architecture ppc64
If that doesn't help, please send the error messages you're getting.
Re: Is it possible to add pid and comm members to the event struct ...
Amir Goldstein writes: (Summary) The fanotify API does not support monitoring file deletion events. Yes, I am working toward that goal.
+ event->tgid = get_pid(task_pid(current));
So if you would like to change that, you need to add a new flag to fanotify_init (e.g. FAN_EVENT_INFO_TID).
New applications that opt in for the flag will get task_pid(), while existing applications will keep getting task_tgid(). New applications will get -EINVAL when passing FAN_EVENT_INFO_TID to fanotify_init() on an old kernel, and they could then fall back to getting tgid in events and be aware of that fact.

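The opt-in scheme described above (a new fanotify_init() flag, with an -EINVAL fallback on older kernels) is a common API-negotiation pattern. Here is a sketch with a stubbed-out init function; FAN_EVENT_INFO_TID and its value are hypothetical, taken from the proposal in the thread rather than from any released kernel header.

```c
#include <errno.h>

/* Hypothetical flag value from the proposal; not a real kernel constant. */
#define FAN_EVENT_INFO_TID 0x100

/* Stub standing in for fanotify_init(): an old kernel rejects unknown
 * flags with EINVAL, a new one accepts them. */
int fake_fanotify_init(unsigned int flags, int kernel_supports_tid)
{
    if ((flags & FAN_EVENT_INFO_TID) && !kernel_supports_tid) {
        errno = EINVAL;
        return -1;
    }
    return 3; /* pretend file descriptor */
}

/* Negotiation pattern: try the new flag first, fall back without it.
 * Returns 1 if TID reporting is active, 0 if we fell back to TGID. */
int init_with_tid_fallback(int kernel_supports_tid, int *fd_out)
{
    int fd = fake_fanotify_init(FAN_EVENT_INFO_TID, kernel_supports_tid);

    if (fd >= 0) {
        *fd_out = fd;
        return 1;               /* events will carry task_pid() */
    }
    if (errno == EINVAL) {
        *fd_out = fake_fanotify_init(0, kernel_supports_tid);
        return 0;               /* old kernel: events carry task_tgid() */
    }
    *fd_out = -1;
    return -1;
}
```

The point of the pattern is exactly what the mail describes: old binaries keep their old semantics, and new binaries can detect at runtime which semantics they got.
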
Re: [PATCH net-next 1/3] net: rework SIOCGSTAMP ioctl handling
Arnd Bergmann writes: (Summary) wrote: -EXPORT_SYMBOL(sock_get_timestamp);
As I just learned, sparc64 uses a 32-bit suseconds_t, so this function always leaked 32 bits of kernel stack data by copying the padding bytes of 'tv' into user space.
Linux-4.11 and higher could avoid that with
have been affected since socket timestamps were first added. The same thing is probably true of many other interfaces that pass a timeval.
My new implementation is worse here: it no longer leaks stack data, but since we now write a big-endian 64-bit microseconds value, the microseconds are in the wrong place and will be interpreted as zero by user space...
I'll also have to revisit a few other similar patches I did for y2038, to figure out what they should do on sparc64.
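
The leak mechanism is easy to reproduce in userspace: on an ABI where suseconds_t is 32-bit, as on sparc64, struct timeval ends with padding bytes that field-by-field assignment never initializes, so copying the whole struct out ships stack garbage. The struct below hand-mimics that layout on a typical 64-bit build; this is my own illustration, not kernel code.

```c
#include <stdint.h>
#include <string.h>

/* Mimics sparc64's layout on a 64-bit kernel: a 64-bit seconds field
 * plus a 32-bit microseconds field leaves 4 trailing padding bytes. */
struct compat_timeval {
    int64_t tv_sec;
    int32_t tv_usec;
    /* 4 bytes of implicit padding here on common 64-bit ABIs */
};

/* Unsafe pattern: assigning the fields leaves the padding holding
 * whatever was already in memory, and a copy_to_user() of the whole
 * struct would ship those bytes out. */
void fill_leaky(struct compat_timeval *tv)
{
    tv->tv_sec = 1234;
    tv->tv_usec = 5678;
}

/* Safe pattern: zero the whole struct first so the padding is defined. */
void fill_safe(struct compat_timeval *tv)
{
    memset(tv, 0, sizeof(*tv));
    tv->tv_sec = 1234;
    tv->tv_usec = 5678;
}

/* Returns 1 if every byte past tv_usec is zero. */
int padding_is_zero(const struct compat_timeval *tv)
{
    const unsigned char *p = (const unsigned char *)tv;
    size_t i;

    for (i = sizeof(int64_t) + sizeof(int32_t); i < sizeof(*tv); i++)
        if (p[i] != 0)
            return 0;
    return 1;
}
```

Zeroing the whole struct before filling it, as in fill_safe(), is the standard fix the discussion alludes to.
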
[GIT PULL] pin control fixes for v4.19
Linus Walleij writes: (Summary) Please pull them in!
Details in the signed tag.

Linus Walleij

The following changes since commit 5b394b2ddf0347bef56e50c69a58773c94343ff3:

  Linux 4.19-rc1 (2018-08-26 14:11:59 -0700)

are available in the Git repository at:

  git:// tags/pinctrl-v4.19-2

for you to fetch changes up to 5bc5a671b1f4b3aa019264ce970d3683a9ffa761:

  pinctrl: madera: Fix possible NULL pointer with pdata config (2018-08-29 14:02:47 +0200)

----------------------------------------------------------------
Pin control fixes for th
Re: [PATCH] proc: restrict kernel stack dumps to root
Jann Horn writes: (Summary) return 0;) In my mind, this is different because it's a place where we don't have to selectively censor output while preserving parts of it, and it's a place where, as Laura said, it's useful to make the lack of privileges clearly visible, because that informs users that they may have to retry with more privileges.
Of course, if you have an example of software that actually breaks due to this, I'll change it. But I looked at the three things in Debian codesearch that seem to use it, and from what I can tell, they all bail out cleanly when the read fails.

Regression: kernel 4.14 and later very slow with many ipsec tunnels
Wolfgang Walter writes: (Summary) They perform so badly that they are unusable for our router with about 3000 ipsec tunnels (tunnel mode network <->
This test basically copies large files with scp to 60 different remote locations (involving ipsec), limited to 1 Gbit/s combined, and in parallel I ping from different networks over this router to machines in other networks of this router (no ipsec tunnels involved).
With 4.9 and earlier copying needs about 2 minutes and the pings all remain under 2ms roundtrip.
Re: [RFC 00/60] Coscheduling for Linux
"Jan_H._Schönherr" writes: (Summary) On 09/13/2018 01:15 AM, Nishanth Aravamudan wrote:
This goes away with the change below (which fixes patch 58/60).
Thanks,
diff --git a/kernel/sched/cosched.c b/kernel/sched/cosched.c index a1f0d3a7b02a..a98ea11ba172 100644 --- a/kernel/sched/cosched.c +++ b/kernel/sched/cosched.c @@ -588,6 +588,7 @@ static void sdrq_update_root(struct sdrq *sdrq) /* Get proper locks */ rq_lock_irqsave(rq, &rf);
[PATCH v7 0/7] clocksource: rework Atmel TCB timer driver
Alexandre Belloni writes: (Summary) They currently are not able to boot a mainline kernel.
- using the PIT doesn't work well with preempt-rt because its interrupt is shared (in particular with the UART, and their interrupt flags are incompatible)
- the current solution is wasting some TCB channels
The plan is to get this driver upstream, then convert the TCB PWM driver to be able to get rid of the tcb_clksrc driver along with atmel_tclib now that AVR32 is gone.

changes in v7:
- fixed a warning when building on 64 bit platforms

changes in v6:
- rebased on v4.19-rc1
- separated the clocksource/clockevent and the single clockevent in two different patches
- removed struct tc_clkevt_device and simply use struct atmel_tcb_clksrc - removed struct atmel_tcb_info
- moved tcb_clk_get and tcb_irq_get to users
changes in v5
An easy way to unlock the bootloader on the Xiaomi Pocophone F1 (Beryllium)
An easy way to unlock the bootloader on the Xiaomi Pocophone F1

Who would have thought that this year Xiaomi would once again release a phenomenal smartphone line with heavyweight specifications at a bargain price? That phone is the Xiaomi Pocophone F1, with a Qualcomm Snapdragon 845 processor, sold for only around 4 million rupiah. This smartphone is truly crazy.

Beyond its many strengths, the fact remains that the Poco F1 follows the same procedures as other Xiaomi devices, so unlocking its bootloader works exactly the same way as on any other Xiaomi device. Below is the simple, official method you can apply to unlock the Xiaomi Pocophone F1.

Requirements:
1. A PC/laptop
2. A USB cable
3. A Xiaomi Pocophone F1 with more than 50% battery
4. A good internet connection
5. A Mi Account that is logged in on the Xiaomi smartphone and also on the official MIUI website (it must be the same account)

# Officially requesting the bootloader unlock

1. Go to the official MIUI Unlock page.
2. Choose the "Unlock" menu.
3. Next, click the button in the middle labeled "Unlock Now".
4. You will be taken to a new page; at this step you must provide accurate information:
#1 Enter your name, at most two words (for example: "jhon boni" or "teplak meja")
#2 Select the Indonesia region (+62)
#3 Enter a phone number where you can be reached. This is mandatory; do not fill it in carelessly, because the approval status will be sent to the number you entered!
#4 Give your reason for wanting to unlock the bootloader. Remember to write it in English; see the trick here: An easy way to get bootloader unlock approval quickly on Xiaomi devices
#5 Tick the box and click Apply

Note: If you are unlucky, the unlock request stage can take 10 days or more, but if you are lucky your request will usually be granted within 1-2 days.

# Preparing for the unlock

1. First download the latest Mi Flash Unlock application, or get it from the official site HERE.
2. Next, associate your Mi Account: go to Settings >> Additional settings >> Developer options, tick the OEM Unlocking option, then open the Mi Unlock Status menu and choose "Add account and device".
If you don't know how, follow this article: The latest and fastest way to unlock the bootloader on Xiaomi devices
3. Make sure your device is fully associated. If it is not, do not attempt the unlock; it is certain to fail in the end.
4. Once the association is complete, continue to the execution stage.

# Unlocking the bootloader

1. First, disable Driver Signature Enforcement.
2. Next, install the Mi Flash Unlock application you downloaded earlier.
3. Open Mi Flash Unlock and log in with your Mi Account (remember, it must be the same Mi Account you registered when requesting the unlock).
4. Power off your Xiaomi phone, then enter fastboot mode by holding the Volume Down (-) and Power buttons together until the picture of the rabbit repairing a broken robot appears.
5. Connect your Xiaomi device to the PC with the USB cable.
6. Once it is connected, click the "Unlock" button.
7. The process completes quickly.
8. When it succeeds, the message "Unlock Successfully" appears; click "Reboot Now".
9. Done!

If the unlock ultimately fails, you can follow this tutorial: How to fix a failed Xiaomi bootloader unlock. Also be aware that unlocking a Xiaomi bootloader carries several risks, as Xiaomi itself has explained.

That said, unlocking the bootloader also brings many advantages, one of which is how much easier it makes flashing, installing custom ROMs/kernels, installing a custom recovery, and more.

Finally, don't forget about this device's unlock binding time: it can take up to 360 hours before the unlock is allowed. The only option is to wait until that binding period ends, after which the unlock can proceed.

Hopefully this article is useful. Once again, we take no responsibility for any damage to your Xiaomi smartphone, so proceed at your own risk. Good luck.
Redmi high-spec model on Android 8.0.x, a straight repost
Found on XDA; it took some effort to bring it over. ROM OS Version: 8.x Oreo. ROM Kernel: Linux 3.x. Based On: Havoc-OS. Test for bugs yourself. Flash it with this recovery. Link: ...
Red Hat dev questions why older Linux kernels are patched quietly

itwire: A Linux developer who works for the biggest open source vendor, Red Hat, has questioned why security holes in older Linux kernels, those listed as having long-term support, are being quietly patched.

Brunch Better with Karaoke at The Alibi and Jazz at Perlot
by Andrea Damewood

I’m a famous brunch curmudgeon. I won’t go over my reasons, because you already know them. (But for one, long waits for a stupid scramble that I could make better at home for less money.) Grumble, grumble, grumble.

Still, I love a brunch that offers something I can’t get from my cozy kitchen, be it excellent cocktails, international delicacies, or, in the case of two new brunch services from the Alibi and Perlot, excellent musical entertainment.

Perlot, located on NE Fremont with a globally inspired menu from Chef Patrick McKee, has a robust live jazz program at night, and now all weekend at brunch. The Alibi is rivaled only by Chopsticks for absolute Portland karaoke supremacy—and with the addition of benedict and boozy coffee, it’s got my vote for the best.

Alibi Restaurant & Lounge

I’m not familiar with anyone who goes to karaoke bars for the food. This remains the case at the Alibi’s Hawaiian-influenced brunch, but honestly, no matter what you order, you’re in for a good time. Start off with the Hawaiian Coffee ($9); with both 151 and dark rum, it’s guaranteed you’ll start to limber up for pre-noon vocal gymnastics.

I went with a group of five friends on a recent Sunday, and it was as dark as always on the inside, Tiki lights and disco ball glowing at the ungodly hour of 11 am. Menus in one hand and karaoke songbook in the other, we picked our first meal of the day and morning-themed songs to kick things off right. (“Good Morning, Good Morning;” “Breakfast at Tiffany’s;” and “What’s Up” are all excellent options, IMHO.)

We didn’t have to wait long: even without plenty of pre-game cocktails, and with the Alibi’s karaoke brunch still new, we were the first ones up to sing. While more people arrived as the brunch went on, we got in at least three or four songs each.

Brunch items include coconut shrimp and grits ($12) and a hamburger with bacon and egg ($12). The loco moco ($10), the island favorite of a burger patty over rice with gravy and a fried egg, fell super flat, begging for any hint of salt or hot sauce, and a side of Spam for $3 was too grilled to eat. Best go with the Kalua pork benedict ($12), served on those tasty Hawaiian sweet rolls with grits on the side. And be warned: food arrives piecemeal and over the course of an hour or so, so there’s no point in politely waiting for everyone to be served.

Thanks to a boozy coffee, several $4 mimosas with guava juice, and an enthusiastic, sparse audience, by the end, I was feeling brave enough to tackle songs at noon that midnight-me would never try. Pro tip: Order a $28 bowl with three kinds of rum when you arrive. Slurp that thing down as a group, using ridiculously long plastic straws (sorry, sea turtles)—it’s a pregame bonding ritual that shouldn’t be missed. See you there.

Sundays 11 am-2:30 am, 4024 N Interstate,


Perlot

With prices from $7 to $15, brunch at Perlot isn’t any more expensive than most Portland spots, but it sure feels way fancier.

There are a lot of seats to fill in this former Smallwares space (which was also called Southfork before getting rebranded last year), so walking in at 11 am on a Saturday was a blissfully easy experience. Grab a seat in the back bar area to catch the live jazz and tuck into some above-average brunch classics.

I’m not much for day drinking when I’ve got shit to do, but no matter what, get the peach herb cocktail. It arrives full of peachy summer brightness: D.L. Franklin vodka balanced with basil and white peach purée, for $8. I managed to have just one and still run errands just fine, thankyouverymuch.

A summer veggie omelet with ricotta ($11) flowed with yummy cheese and late summer bounty, including corn, which I thought was a weird thing to put into an omelet, until the sweet crunchy kernels taught me otherwise. A benedict with smoked butter Hollandaise delivered on its smoky promise, and an heirloom tomato tucked under the braised greens was a welcome touch of sweet acidity in the rich dish ($12). We also split a Belgian waffle ($8), a single fluffy circle served with simple butter and jam.

As the sounds of a piano played in the background, it’s hard to imagine that other people choose to brunch otherwise.

Saturday & Sunday 8:30 am-2:30 pm (jazz from 10 am-1 pm), 4605 NE Fremont,


          LepideAuditor for File Server (Perpetual Model) for 5 Servers
Kernel Sidewise Discount 15%: Discount 449.85 USD
          Linux kernel information disclosure vulnerability (CVE-2018-16658)

Linux kernel information disclosure vulnerability (CVE-2018-16658)

Linux kernel < 4.18.6

CVE(CAN) ID: CVE-2018-16658

The Linux kernel is the kernel used by the open source Linux operating system.

In Linux kernel versions before 4.18.6, the 'cdrom_ioctl_drive_status' function in drivers/cdrom/cdrom.c has an information disclosure vulnerability. A local attacker can exploit it via crafted input to read kernel memory.

          YubiKey full disk encryption with UEFI secure boot for everyone

I've created a full disk encryption setup guide. If you complete this guide, you will have an encrypted root and home partition with YubiKey two-factor authentication, an encrypted boot partition and UEFI secure boot enabled. Sounds complicated? No, it isn't!

It took me several days to figure out how to set up a fully encrypted machine with 2FA. This guide should (hopefully) help you get it done in a few hours. There are plenty of tutorials out there, but none contains a step-by-step guide that gets all of the following things done.

- YubiKey-encrypted root ( / ) and home ( /home ) folders on separate partitions
- Encrypted boot ( /boot ) folder on a separate partition
- UEFI secure boot with a self-signed boot loader
- YubiKey authentication for user login and sudo commands
- Hooks to auto-sign the kernel after an upgrade
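For the last item, the auto-signing hook, here is a minimal sketch of what such a pacman hook can look like on Arch. The hook filename and the key/certificate paths (/etc/efi-keys/db.key, /etc/efi-keys/db.crt) are illustrative assumptions, not values from the guide:

```ini
# /etc/pacman.d/hooks/99-secureboot.hook (hypothetical path)
[Trigger]
Operation = Install
Operation = Upgrade
Type = Package
Target = linux

[Action]
Description = Signing kernel for UEFI Secure Boot
When = PostTransaction
# sbsign comes from the sbsigntools package; key/cert paths are examples.
Exec = /usr/bin/sbsign --key /etc/efi-keys/db.key --cert /etc/efi-keys/db.crt --output /boot/vmlinuz-linux /boot/vmlinuz-linux
Depends = sbsigntools
```

With a hook like this in place, every kernel upgrade re-signs the image so secure boot keeps accepting it.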

You should be familiar with Linux and you should be able to edit files with vi/vim . You need a USB stick for the Linux live environment, and a second computer is useful for lookups and for reading this guide while you prepare your fully encrypted Linux. And of course you will need a YubiKey .

Number  Start (sector)  End (sector)        Size  Code  Name
     1            2048         4095  1024.0 KiB  EF02  BIOS boot partition
     2            4096      1232895   600.0 MiB  EF00  EFI System
     3         1232896      2461695   600.0 MiB  8300  Linux filesystem
     4         2461696   2000409230   952.7 GiB  8E00  Linux LVM

The disk partitions will look similar to the above, and the GRUB boot loader will ask you to unlock the boot partition with a password. After that, you will be asked to unlock the root and home partitions with a password and your YubiKey device (2FA). The BIOS will also be protected by a password, since otherwise UEFI secure boot could simply be disabled. But even in that case, your root and home partitions would still be encrypted. This is maximum security.

At the moment there exists only a guide for Arch Linux, but it should be similar for other Linux distributions. If you want to write a guide for Debian/Ubuntu or any other Linux, don't hesitate to open an issue on GitHub or bring your pull request.

If you like this guide, please spread the word, so everyone can use it and don't forget to star this project on GitHub.

          30 must-know Linux command tips

1. Vim: automatically add a header comment
# vi ~/.vimrc
set autoindent
set tabstop=4
set shiftwidth=4
function AddTitle()
    call setline(1,"#!/bin/bash")
    call append(1,"#====================================================")
    call append(2,"#Author: lizhenliang")
    call append(3,"#CreateDate: ".strftime("%Y-%m-%d"))
    call append(4,"#Description:")
    call append(5,"#====================================================")
endf
map <F4> :call AddTitle()<cr>

2. Find and delete files under /data created more than 7 days ago
# find /data -ctime +7 -exec rm -rf {} \;
# find /data -ctime +7 | xargs rm -rf

3. Exclude a directory when compressing with tar
# tar zcvf data.tar.gz /data --exclude=tmp   # --exclude omits a directory or file; multiple excludes may follow

4. List a tar archive without extracting it
# tar tf data.tar.gz   # t lists the archive contents, f specifies the archive file

5. View a file's attributes with the stat command
stat index.php
Access: 2018-05-10 02:37:44.169014602 -0500
Modify: 2018-05-09 10:53:14.395999032 -0400
Change: 2018-05-09 10:53:38.855999002 -0400

6. Batch-extract tar.gz archives
# ls *.tar.gz | xargs -i tar zxvf {}

7. Filter out comments and blank lines from a file
# awk '!/^#|^$/' httpd.conf

8. List all users in /etc/passwd
# awk -F":" '{print $1}' /etc/passwd

9. iptables site forwarding (NAT)
iptables -t nat -A POSTROUTING -s [internal IP or subnet] -j SNAT --to [public IP]
iptables -t nat -A PREROUTING -d [external IP] -p tcp --dport [external port] -j DNAT --to [internal IP:internal port]

10. Forward local port 80 to local port 8080 with iptables
# iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080

11. Find a file and copy it to /opt
# find /etc -name httpd.conf | xargs -i cp {} /opt   # -i substitutes {} with each result

12. Find files larger than 1 GB under /
# find / -size +1024M
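Tips 7 and 8 can be sanity-checked without a real httpd.conf; this self-contained sketch feeds awk an inline sample (the file contents are invented):

```shell
#!/bin/sh
# Tip 7: drop comment and blank lines from a config-style file.
printf '%s\n' '# comment' '' 'Listen 80' 'ServerName localhost' > sample.conf
awk '!/^#|^$/' sample.conf
# prints:
# Listen 80
# ServerName localhost

# Tip 8: print the user-name field from a passwd-style line.
printf 'root:x:0:0::/root:/bin/bash\n' | awk -F":" '{print $1}'
# prints: root
```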


13. Count connections per IP on the server
# netstat -tun | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n
-tun: -tu shows TCP and UDP connections, n prints numeric IP addresses
cut -d: -f1: cut selects part of each line; -d sets ":" as the delimiter, -f1 prints the first field
uniq -c: reports or removes repeated lines; -c prefixes each line with its count
sort -n: numeric sort, ascending by default (-r for descending)

14. Insert a line at line 391, including the special character "/"
# sed -i "391s/^/AddType application\/x-httpd-php .php .html/" httpd.conf

15. List the 10 IPs with the most visits in an nginx log
# awk '{print $1}' access.log | sort | uniq -c | sort -nr | head -n 10
sort: sort the lines
uniq -c: merge repeated lines and record the repeat count
sort -nr: numeric descending sort
# awk '{a[$1]++}END{for(v in a)print v,a[v]|"sort -k2 -nr|head -10"}' access.log

16. Show the top 10 IPs by visits for one day in an nginx log
# awk '$4>="[16/May/2017:00:00:01" && $4<="[16/May/2017:23:59:59"' access_test.log | sort | uniq -c | sort -nr | head -n 10
# awk '$4>="[16/Oct/2017:00:00:01" && $4<="[16/Oct/2017:23:59:59"{a[$1]++}END{for(i in a){print a[i],i|"sort -k1 -nr|head -n10"}}' access.log

17. Get the request count for the minute before the current time
# date=`date +%d/%b/%Y:%H:%M --date="-1 minute"`; awk -v d=$date '$0~d{c++}END{print c}' access.log
# date=`date +%d/%b/%Y:%H:%M --date="-1 minute"`; awk -v d=$date '$4>="["d":00" && $4<="["d":59"{c++}END{print c}' access.log
# grep `date +%d/%b/%Y:%H:%M --date="-1 minute"` access.log | awk 'END{print NR}'
# start_time=`date +%d/%b/%Y:%H:%M:%S --date="-5 minute"`; end_time=`date +%d/%b/%Y:%H:%M:%S`; awk -v start_time="[$start_time" -v end_time="[$end_time" '$4>=start_time && $4<=end_time{count++}END{print count}' access.log

18. Match integers between 1 and 255
# ifconfig | egrep -o '\<([1-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\>'

19. Extract IP addresses
# ifconfig | grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}'   # -o prints only the matched text

20. Add header and footer information to output
# awk 'BEGIN{print "header text"}{print $1,$NF}END{print "footer text"}' /etc/passwd
# awk 'BEGIN{printf "date ip\n------------------\n"}{print $3,$4}END{printf "------------------\nend...\n"}' /var/log/messages
date ip
------------------
03:13:01 localhost
10:51:45 localhost
------------------
end...

21. Commands to inspect network status
# netstat -antp   # view all network connections
# netstat -lntp   # view only listening ports
# lsof -p pid     # view file handles opened by a process
# lsof -i:80      # view which process occupies port 80

22. Generate an 8-character random string
# cat /proc/sys/kernel/random/uuid | cut -c1-8

23. Infinite while loop
while true; do   # the condition is exactly true; you can also use [ "1" == "1" ], which is always true
done

24. Formatted output with awk
# awk '{printf "%15s%10s%20s\n",$1,$2,$3}' test.txt

25. Keep decimal places in integer arithmetic
# awk 'BEGIN{printf "%.2f\n",10/3}'

26. Sum numbers
# cat a.txt
10
23
53
56
Method 1:
#!/bin/bash
while read num; do
    sum=`expr $sum + $num`
done < a.txt
echo $sum
Method 2:
# cat a.txt | awk '{sum+=$1}END{print sum}'

27. Test whether a value is a number (string tests work the same way)
# [[ $num =~ ^[0-9]+$ ]] && echo yes || echo no   # [[ ]] is more general than [ ]; it supports =~ pattern matching and wildcard string comparison
^ $: the condition holds only if the string is digits from start to end

28. Delete newlines and replace spaces with another character
# cat a.txt | xargs echo -n | sed 's/[ ]/|/g'   # -n: no trailing newline
# cat a.txt | tr -d '\n'   # delete newline characters

29. View lines 20 to 30 of a text file (100 lines total)
# head -30 test.txt | tail

30. Swap the positions of two columns in a text file
# cat a.txt
# awk '{print $2"\t"$1}' a.txt
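To see tip 15 work end to end, here is a self-contained sketch that fabricates a three-line access log (the IPs and timestamps are invented) and runs the same pipeline:

```shell
#!/bin/sh
# Create a tiny fake nginx-style access log (made-up entries).
printf '%s\n' \
  '10.0.0.1 - - [16/May/2017:10:00:01] "GET / HTTP/1.1" 200' \
  '10.0.0.2 - - [16/May/2017:10:00:02] "GET /a HTTP/1.1" 200' \
  '10.0.0.1 - - [16/May/2017:10:00:03] "GET /b HTTP/1.1" 200' > access.log

# Tip 15: count requests per client IP, most frequent first.
awk '{print $1}' access.log | sort | uniq -c | sort -nr | head -n 10
```

The first output line should rank 10.0.0.1 with a count of 2.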


          [$] Toward better handling of hardware vulnerabilities

Post Syndicated from corbet original

From the kernel development community’s point of view, hardware vulnerabilities are not much different from the software variety: either way, there is a bug that must be fixed in software. But hardware vendors tend to take a different view of things. This divergence has been reflected in the response to vulnerabilities like Meltdown and Spectre, which was seen by many as being severely mismanaged. A recent discussion on the Kernel Summit discussion list has shed some more light on how things went wrong, and what the development community would like to see happen when the next hardware vulnerability comes around.

          Canonical Outs New Linux Kernel Live Patch for Ubuntu 18.04 LTS and 16.04 LTS

Canonical released a new kernel live patch for all of its LTS (Long Term Support) Ubuntu linux releases to address various security vulnerabilities discovered by various security researchers lately.

Coming hot on the heels of the latest Linux kernel security update released by Canonical on Tuesday, the new Linux kernel live patch security update fixes a total of five security vulnerabilities, which are documented as CVE-2018-11506, CVE-2018-11412, CVE-2018-13406, CVE-2018-13405, and CVE-2018-12233.

These include a stack-based buffer overflow ( CVE-2018-11506 ) discovered by Piotr Gabriel Kosinski and Daniel Shapira in the Linux kernel's CDROM driver implementation, which could allow a local attacker to either execute arbitrary code or crash the system via a denial of service.

Discovered by Jann Horn, the kernel live patch also addresses a security vulnerability ( CVE-2018-11412 ) in Linux kernel's EXT4 file system implementation, which could allow an attacker to execute arbitrary code or crash the system via a denial of service by creating and mounting a malicious EXT4 image.

Also fixed are an integer overflow ( CVE-2018-13406 ) discovered by Silvio Cesare in Linux kernel's generic VESA frame buffer driver, as well as a buffer overflow ( CVE-2018-12233 ) discovered by Shankara Pailoor in the JFS file system implementation, both allowing local attackers to either crash the system or execute arbitrary code.

The last security vulnerability ( CVE-2018-13405 ) fixed in this latest Ubuntu Linux kernel live patch may allow a local attacker to gain elevated privileges due to the Linux kernel's failure to handle setgid file creation when the operation is performed by a non-member of the group.

All livepatch users must update immediately

The new Linux kernel live patch security update is available now for 64-bit (amd64) installations of the Ubuntu 18.04 LTS (Bionic Beaver), Ubuntu 16.04 LTS (Xenial Xerus), and Ubuntu 14.04 LTS (Trusty Tahr) operating system series that have the Canonical Livepatch Service active and running.

While Ubuntu 18.04.1 LTS and Ubuntu 16.04.5 LTS users must update the kernel packages to version 4.15.0-32.35 and 4.15.0-32.35~16.04.1 respectively, Ubuntu 14.04.5 LTS users will have to update their kernels to version 4.4.0-133.159~14.04.1. A reboot is not required when installing a new kernel live patch. All livepatch users must update their systems immediately.
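Whether a given machine still needs the update can be estimated by comparing its kernel package version against the patched one from the article. This is a rough sketch using GNU sort -V; the "current" value below is an invented example, not a real installed kernel:

```shell
#!/bin/sh
# Patched version from the article (Ubuntu 18.04 LTS).
required="4.15.0-32.35"
# Hypothetical currently-installed package version.
current="4.15.0-30.33"

# sort -V orders version strings; if the current version sorts
# first and differs from the required one, it is older.
lowest=$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n 1)
if [ "$lowest" = "$current" ] && [ "$current" != "$required" ]; then
    echo "update needed"
else
    echo "up to date"
fi
```

With the example values above, the sketch reports that an update is needed.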

          Del Monte Canned Vegetables, Only $0.86 at Walmart!

This is a wonderful way to stock up on canned food for you and your family. Grab multiple cans of vegetables for a discounted price at Walmart!

Del Monte Whole Kernel Corn, 15.25 oz – $0.98
$0.50/4 – Del Monte Canned Vegetables, 10-18 oz
Final Price: $0.86 each

Thanks, Krazy Coupon Lady!

The post Del Monte Canned Vegetables, Only $0.86 at Walmart! appeared first on Coupons and Freebies Mom.


          Vuln: Linux Kernel CVE-2018-5391 Remote Denial of Service Vulnerability
Linux Kernel CVE-2018-5391 Remote Denial of Service Vulnerability
          Vuln: Linux Kernel CVE-2018-6554 Multiple Denial of Service Vulnerabilities
Linux Kernel CVE-2018-6554 Multiple Denial of Service Vulnerabilities
          Recipe: Warm Up with Three Sisters Soup

This soup features the ingredients of a Three Sisters Garden, a Native American tradition of growing corn, beans and squash together. Cooked together they make a great soup that is popular with vegetarians and vegans.

Three Sisters Soup is a soothing first course for a holiday meal or an everyday comfort food using the gorgeous local produce available in our Produce department. Omnivores: The soup pairs well with our new seasonal Seward-made sausage, available in the Meat department.

2 pounds of your favorite winter squash (butternut, acorn, kabocha)
2 to 3 tablespoons olive oil
1 yellow onion, diced
1/4 cup garlic, chopped
2 quarts vegetable stock or water
1/2 cup white wine
2 teaspoons dried thyme
1 large bay leaf
1 pound fresh or frozen corn kernels
2 15.5-ounce cans cannellini beans, drained
1/2 bunch green onions, sliced
Salt and pepper to taste

Preheat the oven to 350°F. Halve the squash and scoop out the seeds. Place the squash halves skin-side down on a lightly oiled baking sheet, and then roast until cooked through and soft, anywhere from 30 to 90 minutes (see tips below for cooking times depending on your squash). Remove from the oven and allow to cool.

Scoop the flesh of the squash into a large bowl (save any liquid!). Puree the cooled squash with a blender or food processor, adding some of the reserved liquid if needed.

In a large stockpot, heat the oil over medium heat and sauté the onions until they begin to brown. Add the garlic and cook, stirring often, until the garlic turns light brown in color.
Add the stock or water, wine, thyme, bay leaf and pureed squash and bring to a simmer. Stir in the remaining ingredients and simmer for 15 to 20 minutes. Taste and adjust seasoning as needed.

Tips & Notes
Squash cooking times will vary depending on the type and size of squash. At 350°F you can expect these approximate cooking times:

Acorn squash: 30-45 minutes
Kabocha squash: 40-50 minutes
Butternut squash: 60-90 minutes

Credit: National Co-op Grocers

          Canonical Outs New Linux Kernel Live Patch for Ubuntu 18.04 LTS and 16.04 LTS

softpedia: Coming hot on the heels of the latest Linux kernel security update released by Canonical on Tuesday, the new Linux kernel live patch security update fixes a total of five security vulnerabilities.

          How To Install All Phone And Box Driver On Windows 10

Install All Phone And Box Driver On Windows 10

  1. Click the Start menu and select Settings.

          #21 A zero-day vulnerability discovered in the Tor browser, disclosed because it will no longer make money

#6 Of course not, and that is precisely the problem with this news: Tor Browser does not ship with JavaScript disabled by default, and its "Security Settings" menu alone merely puts NoScript into strict mode. But as the article says, if there is a vulnerability in NoScript, as happened here, you are already compromised. The recommendation is therefore to go into about:config and configure everything from there, so as not to depend on extensions that can have bugs.

And no, using Tails is not the solution, since by default its apps still run without isolation. The most advisable options are Subgraph or Qubes, where Tor Browser is always kept in a sandbox and can never see the rest of the system. Subgraph also ships the grsecurity/PaX kernel, the most secure option on Linux today.

And that is without mentioning the need for MAC spoofing and other measures.

» author: saulot

          MAGIX Independence Pro Library v3.0 DVD9-R2R
MAGIX Independence Pro Library v3.0 DVD9-R2R
Team R2R | 38 GB

Independence is the ultimate sampler workstation for professional music production in the studio and for live productions - Independence Pro and Independence Basic are also available now. Independence's audio engine has been redeveloped and improved; it now includes time-stretching and pitch-shifting options. Using the innovative multi-core support you can specify how many of your computer's cores should be reserved for Independence. This ensures that Independence has the largest share of CPU resources at its disposal without causing problems for other processes.
          BlackBerry to spark ultra-secure hyperconnectivity with new EoT platform      Cache   Translate Page      
BlackBerry has unveiled BlackBerry Spark, the only Enterprise of Things (EoT) platform designed and built for ultra-secure hyperconnectivity from the kernel to the edge.
          BlackBerry to Spark Ultra-Secure Hyperconnectivity with New EoT Platform      Cache   Translate Page      

BlackBerry Limited has unveiled BlackBerry Spark, the only Enterprise of Things (EoT) platform designed and built for ultra-secure hyperconnectivity from the kernel to the edge. Defined as the interconnectedness of people, organizations, and machines, hyperconnectivity is set to revolutionize the way people work and live. BlackBerry’s new platform enables: OEMs to make complex ‘things’, like […]

The post BlackBerry to Spark Ultra-Secure Hyperconnectivity with New EoT Platform appeared first on Telecom Drive.

          LXer: Support for a LoRaWAN Subsystem
Published at LXer: Sometimes kernel developers find themselves competing with each other to get their version of a particular feature into the kernel. But sometimes developers discover they've been...
          cfg80211 world regulatory domain setting has NO-IR feature (Intel 9260)
Hi, I am using an Intel 9260 with kernel version 4.9. When I check "iw reg get", I see NO-IR flags on a few channel groups. NO-IR stands for no initiated radiation...
           KN-7000D High power SMD LED photodynamic therapy red blue yellow bio light therapy PDT skin beauty machine
Outstanding performance PDT device! Kernel Photodynamic Therapy KN-7000D. Trichromatic SMD-LED light source. Dose/Time work mode. Kernel Medical Equipment Co., Ltd.
          Concentration for Coulomb gases on compact manifolds. (arXiv:1809.04231v1 [math.PR])

Authors: David García-Zelada

We study the non-asymptotic behavior of a Coulomb gas on a compact Riemannian manifold. This gas is a symmetric n-particle Gibbs measure associated to the two-body interaction energy given by the Green function. We encode such a particle system by using an empirical measure. Our main result is a concentration inequality in Kantorovich-Wasserstein distance inspired from the work of Chafaï, Hardy and Maïda on the Euclidean space. Their proof involves large deviation techniques together with an energy-distance comparison and a regularization procedure based on the superharmonicity of the Green function. This last ingredient is not available on a manifold. We solve this problem by using the heat kernel and its short-time asymptotic behavior.

          Hybrid matrix compression for high-frequency problems. (arXiv:1809.04384v1 [math.NA])

Authors: Steffen Börm, Christina Börst

Boundary element methods for the Helmholtz equation lead to large dense matrices that can only be handled if efficient compression techniques are used. Directional compression techniques can reach good compression rates even for high-frequency problems.

Currently there are two approaches to directional compression: analytic methods approximate the kernel function, while algebraic methods approximate submatrices. Analytic methods are quite fast and proven to be robust, while algebraic methods yield significantly better compression rates.

We present a hybrid method that combines the speed and reliability of analytic methods with the good compression rates of algebraic methods.

          A random geometric social network with Poisson point measures. (arXiv:1809.04388v1 [math.PR])

Authors: Ahmed Sid-Ali, Khader Khadraoui

We formalize the problem of modeling social networks into Poisson point measures. We obtain a simple model that describes each member of the network at virtual state as a Dirac measure. We set the exact Monte Carlo scheme of this model and its representation as a stochastic process. By assuming that the spatial dependence of the kernels and rates used to build the model is bounded in some sense, we show that the size of the network remains bounded in expectation over any finite time. By assuming the compactness of the virtual space, we study the extinction and the survival properties of the network. Furthermore, we use a renormalization technique, which has the effect that the density of the network population must grow to infinity, to prove that the rescaled network converges in law towards the solution of a deterministic equation. Finally, we use our algorithm for some numerical simulations.

          Quantum chromodynamics through the geometry of M\"{o}bius structures. (arXiv:1809.04457v1 [physics.gen-ph])

Authors: John Mashford

This paper describes a rigorous mathematical formulation providing a divergence free framework for QCD and the standard model in curved space-time. The starting point of the theory is the notion of covariance which is interpreted as (4D) conformal covariance rather than the general (diffeomorphism) covariance of general relativity. It is shown how the infinitesimal symmetry group (i.e. Lie algebra) of the theory, that is $su(2,2)$, is a linear direct sum of $su(3)$ and the algebra ${\mathfrak\kappa}\cong sl(2,{\bf C})\times u(1)$, these being the QCD algebra and the electroweak algebra. Fock space which is a graded algebra composed of Hilbert spaces of multiparticle states where the particles can be fermions such as quarks and electrons or bosons such as gluons or photons is described concretely. Algebra bundles whose typical fibers are the Fock spaces are defined. Scattering processes are associated with covariant linear maps between the Fock space fibers which can be generated by intertwining operators between the Fock spaces. It is shown how quark-quark scattering and gluon-gluon scattering are associated with kernels which generate such intertwining operators. The rest of the paper focusses on QCD vacuum polarization in order to compute and display the running coupling constant for QCD at different scales. Through an easy application of the technique called the spectral calculus the densities associated with the quark bubble and the gluon bubble are computed and hence the QCD vacuum polarization function is determined. It is found that the QCD running coupling constant has non-trivial behavior particularly at the subnuclear level. Asymptotic freedom and quark confinement are proved.

          Generalized Jacobians and explicit descents. (arXiv:1601.06445v2 [math.NT] UPDATED)

Authors: Brendan Creutz

We develop a cohomological description of various explicit descents in terms of generalized Jacobians, generalizing the known description for hyperelliptic curves. Specifically, given an integer $n$ dividing the degree of some reduced effective divisor $\mathfrak{m}$ on a curve $C$, we show that multiplication by $n$ on the generalized Jacobian $J_\frak{m}$ factors through an isogeny $\varphi:A_{\mathfrak{m}} \rightarrow J_{\mathfrak{m}}$ whose kernel is naturally the dual of the Galois module $(\operatorname{Pic}(C_{\overline{k}})/\mathfrak{m})[n]$. By geometric class field theory, this corresponds to an abelian covering of $C_{\overline{k}} := C \times_{\operatorname{Spec}{k}} \operatorname{Spec}(\overline{k})$ of exponent $n$ unramified outside $\mathfrak{m}$. The $n$-coverings of $C$ parameterized by explicit descents are the maximal unramified subcoverings of the $k$-forms of this ramified covering. We present applications of this to the computation of Mordell-Weil groups of Jacobians.

          Remember the Curse of Dimensionality: The Case of Goodness-of-Fit Testing in Arbitrary Dimension. (arXiv:1607.08156v3 [math.ST] UPDATED)

Authors: Ery Arias-Castro, Bruno Pelletier, Venkatesh Saligrama

Despite a substantial literature on nonparametric two-sample goodness-of-fit testing in arbitrary dimensions spanning decades, there is no mention there of any curse of dimensionality. Only more recently Ramdas et al. (2015) have discussed this issue in the context of kernel methods by showing that their performance degrades with the dimension even when the underlying distributions are isotropic Gaussians. We take a minimax perspective and follow in the footsteps of Ingster (1987) to derive the minimax rate in arbitrary dimension when the discrepancy is measured in the L2 metric. That rate is revealed to be nonparametric and exhibit a prototypical curse of dimensionality. We further extend Ingster's work to show that the chi-squared test achieves the minimax rate. Moreover, we show that the test can be made to work when the distributions have support of low intrinsic dimension. Finally, inspired by Ingster (2000), we consider a multiscale version of the chi-square test which can adapt to unknown smoothness and/or unknown intrinsic dimensionality without much loss in power.

          Next-order asymptotic expansion for N-marginal optimal transport with Coulomb and Riesz costs. (arXiv:1706.06008v3 [math-ph] UPDATED)

Authors: Codina Cotar, Mircea Petrache

Motivated by a problem arising from Density Functional Theory, we provide the sharp next-order asymptotics for a class of multimarginal optimal transport problems with cost given by singular, long-range pairwise interaction potentials. More precisely, we consider an $N$-marginal optimal transport problem with $N$ equal marginals supported on $\mathbb R^d$ and with cost of the form $\sum_{i\neq j}|x_i-x_j|^{-s}$. In this setting we determine the second-order term in the $N\to\infty$ asymptotic expansion of the minimum energy, for the long-range interactions corresponding to all exponents $0<s<d$, and for all densities in $L^{1+\frac{s}{d}}(\mathbb R^d)$. We also prove a small oscillations property for this second-order energy term. Our results can be extended to a larger class of models than power-law-type radial costs, such as non-rotationally-invariant costs. The key ingredient and main novelty in our proofs is a robust extension and simplification of the Fefferman-Gregg decomposition (Fefferman 1985, Gregg 1989), extended here to our class of kernels, and which provides a unified method valid across our full range of exponents. Our first result generalizes a recent work of Lewin, Lieb and Seiringer (2017), who dealt with the second-order term for the Coulomb case $s=1,d=3$, for continuous slowly-varying densities, by different methods.

          Degasperis-Procesi peakon dynamical system and finite Toda lattice of CKP type. (arXiv:1712.08306v3 [nlin.SI] UPDATED)

Authors: Xiang-Ke Chang, Xing-Biao Hu, Shi-Hao Li

In this paper, we propose a finite Toda lattice of CKP type (C-Toda) together with a Lax pair. Our motivation is based on the fact that the Camassa-Holm (CH) peakon dynamical system and the finite Toda lattice may be regarded as opposite flows in some sense. As an intriguing analogue to the CH equation, the Degasperis-Procesi (DP) equation also supports the presence of peakon solutions. Noticing that the peakon solution to the DP equation is expressed in terms of bimoment determinants related to the Cauchy kernel, we impose opposite time evolution on the moments and derive the corresponding bilinear equation. The corresponding quartic representation is shown to be a continuum limit of a discrete CKP equation, due to which we call the obtained equation finite Toda lattice of CKP type. Then, a nonlinear version of the C-Toda lattice together with a Lax pair is derived. As a result, it is shown that the DP peakon lattice and the finite C-Toda lattice form opposite flows under certain transformation.

          $d$-dimensional SYK, AdS Loops, and $6j$ Symbols. (arXiv:1808.00612v2 [hep-th] UPDATED)

Authors: Junyu Liu, Eric Perlmutter, Vladimir Rosenhaus, David Simmons-Duffin

We study the $6j$ symbol for the conformal group, and its appearance in three seemingly unrelated contexts: the SYK model, conformal representation theory, and perturbative amplitudes in AdS. The contribution of the planar Feynman diagrams to the three-point function of the bilinear singlets in SYK is shown to be a $6j$ symbol. We generalize the computation of these and other Feynman diagrams to $d$ dimensions. The $6j$ symbol can be viewed as the crossing kernel for conformal partial waves, which may be computed using the Lorentzian inversion formula. We provide closed-form expressions for $6j$ symbols in $d=1,2,4$. In AdS, we show that the $6j$ symbol is the Lorentzian inversion of a crossing-symmetric tree-level exchange amplitude, thus efficiently packaging the double-trace OPE data. Finally, we consider one-loop diagrams in AdS with internal scalars and external spinning operators, and show that the triangle diagram is a $6j$ symbol, while one-loop $n$-gon diagrams are built out of $6j$ symbols.

Update: browser+ and file manager (Utilities)

browser+ and file manager 2.0.2

Device: iOS iPhone
Category: Utilities
Price: Free, Version: 2.0.1 -> 2.0.2 (iTunes)


New engine, version upgrade, faster Internet access, more features!
Use the high-speed browser for a smooth, ultra-fast experience: web pages open in seconds!

New features:
*New design
Upgraded interface that removes cumbersome interactions and focuses on efficient browsing
*Fast browsing
New engine with high-speed page responses that save you time and data
*Smart Search
Accurate search suggestions and direct results from a single search
*Safe and secure
Comprehensively intercepts malicious websites and removes ads so you can browse quietly, restoring pure phone space
*Efficient features
Bookmarks / history, screenshots, ad blocking ... only the most important and simple auxiliary tools
*One is all
Collects all kinds of common URLs for a one-stop, ultra-fast Internet experience

We focus on efficiency and return to the essence of the browser. We sincerely recommend it; welcome to download.


Finalist Drupal Blog: Launching a successful educational Portal


Today, we will take you on a journey through some important insights we achieved as builders of educational portals.

Portals in which Drupal plays a part, and how we managed to add value to the educational portals we have built over the years.

Of course we like to give some examples of how we succeeded, but it is just as interesting to look at some flaws that actually helped us do better the next time.

Educational portal?

But first, just what is an educational portal?


With the word educational we actually say two things:

Educational in the sense that it revolves around education.

Educational in the sense that the portal brings more knowledge or insight to the user.

Another word for a portal might be entrance. That said, the term educational portal can be understood broadly. In this talk, we would like to focus on applications that have a clear internal focus for our university as a whole: our students, our teachers and staff. You can think of electronic learning platforms, digital learning (and working) environments and intranet systems for universities.

Recent years: digital workspace

digital workspace

As part of the digital transformation every university is going through, the term “digital workspace” is floating around in this context. A digital workspace brings together all the aforementioned subsystems into one intuitive platform.

We’ll touch on that subject later on.

Role of Drupal

top universities

Secondly, how do educational portals / digital workspaces relate to Drupal?

Universities around the world have been using Drupal for some years now, even going back to version 4.x. Drupal is particularly popular because of:

  • High modularity

  • Flexible API for integrations

  • Identity and access management

  • Authentication with external providers, OAuth, SSO in place via contribs

  • Open source nature / General Public License

  • Very flexible yet fine-grained management of roles & permissions

And that is exactly where we would like to start today.

Target Audiences


We could say that the typical challenge of education is the broad collection of target audiences.

When developing an educational portal it’s important to know your target audience. Not only are you going to deal with Teachers and Students and cater to their needs, but you’d also have to keep in mind that Parents may be visiting the site, as are Alumni, Heads of Faculties, potential sponsors, researchers, general staff, journalists and the general public.

And that list is probably still not complete.


One way to tackle this is making use of personas, a method of visualising your potential users.

With this method you create fictional characters that each represent one of the user roles. (Image: user roles)

With the personas defined, you can make an educated guess about the type of user journey the users of the portal are going to follow.

The next step is wireframing. An efficient way to achieve a shared view on “what we need” is to invite the target audiences to literally DRAW what they have in mind and bring all these views together in user-experience sessions.


After this, we can use these views in wireframes. This is quite essential in managing expectations. And there is a hidden advantage in this way of working: it can be a superb way of bringing together groups that are not necessarily ‘best friends’ or at least have opposing goals.

Prototype the application and perform user tests with a select group of users who represent the roles defined earlier.


[to be expanded here, with a bridge to the technical tip below]

From a Drupal perspective, we would like to share another important insight we gained while developing portals. While we concluded that Drupal has a flexible basis for role and access management, we need to make sure it remains manageable. The actual handing out of permissions can of course be carried out in Drupal itself, but large organisations should avoid this multilayered approach. In simpler words: we want to make sure all permissions are stored in one central location, such as Active Directory. In the end this will prevent flaws like someone abusing the system while no one notices.

Politics in Education


Working with large educational institutes brings some special challenges in the form of different factions in the organisation. There are not only the IT and business sides of the organisation, but also lots of different faculties, each of which thinks it is the most important faculty of the university.

Getting all these different teams on the same page can be a daunting task and can sometimes lead to extensive rework on your project.

Essential in preventing these issues is understanding the goals of the various stakeholders and realising that, sometimes, it just isn’t possible to please everybody and still have a great product, so you have to make compromises now and then.

There are however some factors which can make your life a little easier, the most important being a good product owner and a competent analyst to really get a feel for what is essential in your project.

Another crucial part of the process is to make proper wireframes, mockups and have a clear content strategy so all parties involved can get a good feel of the expected functionalities. Make sure everybody is on the same page as early in the process as possible!

Having proper personas and having the people involved take a good survey can also be a great help in preventing bickering and arguing.


Organisations in Higher Education probably already have a multitude of systems and programs that need to be incorporated in some way in the portal. Examples of the types of application you’d have to interface with are: HR applications, scheduling programs, learning management systems, publication repositories, mailing lists, external websites, document repositories, course management software, and so on; the list seems endless.


Of course you could write an importer for the XML which comes from the HR application, a feed processor for the external website’s RSS feed, and a file fetcher and processor for the archaic publication repository.

The universities we saw, however, have far more than three such systems.


A better way to handle all these streams of data would be to create a standalone piece of software to act as a middleman, a so-called Enterprise Service Bus or ESB.

Garbage in, Garbage out!


The ESB is built to adapt multiple integrations and normalise the data, which is then distributed in a uniform way to our portal and any other clients. With an enterprise service bus, Drupal only has to take care of a single standardised and centralised integration. This heavily reduces complexity in our portal.

Some of the advantages of using an ESB are:

  • decoupling the requesting party and the distributing party

  • Simplifying and standardising the interfaces between the two parties

  • Stimulating the re-use of data (as data are centrally available, re-use is promoted)

  • A centralised location for monitoring services

  • Reducing time-to-market

  • Sanitising and validating
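As an illustration of what the ESB's normalisation step buys you, here is a minimal Python sketch: two upstream feeds with different shapes are mapped onto one uniform schema before anything reaches the portal. The schema and field names are invented for the example, not a real standard.

```python
from dataclasses import dataclass

# Hypothetical uniform schema the ESB exposes to Drupal and other
# clients; the field names are illustrative assumptions.
@dataclass
class Announcement:
    source: str
    title: str
    body: str

def from_hr_xml(record: dict) -> Announcement:
    # A parsed record from the HR application's XML (field names assumed).
    return Announcement(source="hr", title=record["subject"], body=record["text"])

def from_rss(item: dict) -> Announcement:
    # A parsed item from an external website's RSS feed (field names assumed).
    return Announcement(source="rss", title=item["title"], body=item["description"])

feeds = [from_hr_xml({"subject": "New policy", "text": "Details..."}),
         from_rss({"title": "Campus news", "description": "More details..."})]
# Every client now consumes one schema, regardless of the upstream format.
assert all(isinstance(a, Announcement) for a in feeds)
```

The point is that the mapping code lives in the ESB, so the portal never sees the upstream formats at all.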


While the ideal of an ESB is great, reality is unfortunately different and in some cases you will have to manage external connections to the portal within Drupal.

This simply means that there will probably exist some point-to-point integrations in your portal.

To handle this not so ideal situation, we should implement some control management inside Drupal.

To be more specific: standardize this within your Drupal application.

We need a referee


A gatekeeper or, if you wish, some kind of referee.

This will require two essential things for each integration:

Some sort of gatekeeper functionality which prevents importing garbage.
A proper logging system which helps keep track of unwanted side effects of integrations with third-party software.
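A minimal Python sketch of such a gatekeeper with logging might look like this; the required fields and record shapes are assumptions for illustration, not a real contract:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("portal.gatekeeper")

# Assumed minimal contract for an incoming record (illustrative only).
REQUIRED_FIELDS = {"id", "title", "updated"}

def gatekeeper(record: dict) -> bool:
    """Reject malformed records before they reach the portal ("garbage in")."""
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        # Logging every rejection keeps unwanted side effects traceable.
        log.warning("rejected record %s: missing %s",
                    record.get("id"), sorted(missing))
        return False
    return True

incoming = [{"id": 1, "title": "Course A", "updated": "2018-09-12"},
            {"id": 2, "title": "Course B"}]  # second record is garbage
accepted = [r for r in incoming if gatekeeper(r)]
assert len(accepted) == 1
```

Standardising this pattern across all point-to-point integrations means every rejected record leaves a trace you can audit later.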



Yes, it is a clock and it is a quarter to nine. True.

It actually represents the starting time of the students who were going to use the new portal on their first day at school after the holiday break. We had proudly launched the portal the week before. As teachers were already using it, we had a positive shared view of its use and especially the performance of the system. But the students' day schedules were now part of the portal, and, as somehow we could have foreseen, EVERYONE checked their day schedule at the latest possible moment, so we ran into some big performance problems.

This is a typical example of peak traffic. We hadn’t taken peak times into account.

As a development team we found out that we failed to address the cost of quality on this matter. It would have been better to have some proper stress testing in place.

So, we quickly fixed it by shovelling some extra power to our servers and immediately sitting down with our client's IT people.


Although it is quite tempting, running away will eventually bring more problems. We sat down with the IT people and created the solution we wanted.


Different types of tests

  • Unit / Kernel / Browser & Javascript tests
    Tests which check if your code is working as intended

  • Behavior tests (e.g. Behat)
    With behavioural tests you run scenarios / user stories

  • Visual Regression tests (e.g. BackstopJS)
    Visual regression tests check visually if anything changed on the page

  • Performance tests (e.g. JMeter)
    Test the performance of the application

Performance testing = Being prepared


Some general steps to running tests on your application.

  • Analyse existing data

    • Google Analytics / Logs

    • What are the top pages

    • What pages give errors

  • Prepare test scenario

    • Use the results of the analysis
  • Configure tooling

    • Pick your tool (JMeter?)
  • Run the test

  • Analyse results

    • Profiling & monitoring
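The "analyse existing data" step above can be sketched in a few lines of Python over access-log lines (the log format and paths here are made up for illustration), yielding the top pages and error-prone pages to feed into the test scenario:

```python
from collections import Counter

# Simplified, made-up access-log lines: method, path, HTTP status.
log_lines = [
    "GET /schedule 200", "GET /schedule 200", "GET /courses 200",
    "GET /schedule 500", "GET /login 200",
]

hits, errors = Counter(), Counter()
for line in log_lines:
    _, path, status = line.split()
    hits[path] += 1
    if status.startswith("5"):      # server errors only
        errors[path] += 1

top_pages = hits.most_common(2)     # these drive the test scenario
assert top_pages[0][0] == "/schedule"
assert errors["/schedule"] == 1
```

In practice the same counts come out of Google Analytics or your web server's logs; the point is that the test scenario should hammer the pages users actually visit, not the pages you guess they visit.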



APDEX is a standard for measuring user satisfaction based on page-load time.

Basically it works like this: you set a response-time baseline that’s acceptable and one that’s frustrating for your application (which for an LMS might be a different baseline than for a webshop). Then when you run your test, firing off a gazillion requests at your application, you get a set of results mapped to your baselines following a pretty simple formula:
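The formula referred to is the standard Apdex calculation: samples at or under the target threshold T count fully ("satisfied"), samples between T and 4T count half ("tolerating"), and slower samples ("frustrated") count not at all. A small Python sketch:

```python
def apdex(samples_ms, t_ms):
    """Apdex_T = (satisfied + tolerating / 2) / total, where
    satisfied <= T and tolerating <= 4T (the standard thresholds)."""
    satisfied = sum(1 for s in samples_ms if s <= t_ms)
    tolerating = sum(1 for s in samples_ms if t_ms < s <= 4 * t_ms)
    return (satisfied + tolerating / 2) / len(samples_ms)

# With a 500 ms target: two satisfied, one tolerating, one frustrated.
score = apdex([300, 450, 1200, 2500], t_ms=500)
assert score == (2 + 0.5) / 4  # 0.625
```

A score of 1.0 means every sample was satisfying; anything below roughly 0.7 is usually read as poor.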



APDEX is not the holy grail

Nowadays there are a lot of one-page / JavaScript apps, and you have BigPipe, which skews results. Also, the resulting APDEX score is an average, so shifting the numbers might give you the same score while the number of frustrated results is higher.

So you should always use monitoring, alerts and, if available, analytics to be sure that during expected peak times the system is working as expected.

A nice thing to mention here is the current trend of containerisation of environments, making use of systems like Docker, Kubernetes and OpenShift. A hugely effective feature is autoscaling the environment without facing downtime. On a first day like ours, when facing performance problems, it can take away the harshness of coping with organisational agitation and disgrace. Moreover, it gives you time to fix things the right way.

Technical Choices / architecture

cloud 1

So we were talking about the ESB. What would happen if we considered Drupal as actually being one of the distributing systems, a client to the ESB? We would simply consider Drupal as a content management system, out there to serve content to whatever external system we want.

This is typically the case when operating in an omnichannel communication strategy.

cloud 1

A user experience is partly determined by the online ‘obstacles’ that a user encounters. Removing frictions within the customer journeys of users makes the experience positive. We also believe that omnichannel communication is becoming increasingly important in the online world. In concrete terms, this means that the publication and distribution of content and information via a website is no longer sufficient. Channels such as (native and mobile) apps and other websites are becoming more and more part of the communication strategy. It is crucial that a CMS has the right interfaces and tools to make this possible. The CMS as a publishing machine, which provides direct communication to all channels and chain partners in combination with a front-end interface for the distribution of function, form and content, will form the basis.

Go away, obstacle!

The news here is: Drupal is not necessarily the portal itself.

In fact, we are aiming to serve a unified experience for our users across all channels.


A definition:

“The digital workspace is a new concept that enables tech-savvy employees to access the systems and tools they need from any device—smartphone, tablet, laptop, or desktop—regardless of location. ”


And that, in our opinion, is a very unpretentious definition. One could imagine that, for instance, when you work together in a group for a school project, all these subsystems “know” you and also “know” you are part of that group. When asking tutors questions that apply to the group, you would expect the answers to reach the whole group.

Learning Deep Mixtures of Gaussian Process Experts Using Sum-Product Networks. (arXiv:1809.04400v1 [cs.LG])

Authors: Martin Trapp, Robert Peharz, Carl E. Rasmussen, Franz Pernkopf

While Gaussian processes (GPs) are the method of choice for regression tasks, they also come with practical difficulties, as inference cost scales cubically in time and quadratically in memory. In this paper, we introduce a natural and expressive way to tackle these problems, by incorporating GPs in sum-product networks (SPNs), a recently proposed tractable probabilistic model allowing exact and efficient inference. In particular, by using GPs as leaves of an SPN we obtain a novel flexible prior over functions, which implicitly represents an exponentially large mixture of local GPs. Exact and efficient posterior inference in this model can be done in a natural interplay of the inference mechanisms in GPs and SPNs. Thereby, each GP is -- similarly as in a mixture of experts approach -- responsible only for a subset of data points, which effectively reduces inference cost in a divide and conquer fashion. We show that integrating GPs into the SPN framework leads to a promising probabilistic regression model which is: (1) computationally and memory efficient, (2) allows efficient and exact posterior inference, (3) is flexible enough to mix different kernel functions, and (4) naturally accounts for non-stationarities in time series. In a variety of experiments, we show that the SPN-GP model can learn input-dependent parameters and hyper-parameters and is on par with or outperforms the traditional GPs as well as state of the art approximations on real-world data.

Barrier-Certified Adaptive Reinforcement Learning with Applications to Brushbot Navigation. (arXiv:1801.09627v2 [cs.LG] UPDATED)

Authors: Motoya Ohnishi, Li Wang, Gennaro Notomista, Magnus Egerstedt

This paper presents a safe learning framework that employs an adaptive model learning method together with barrier certificates for systems with possibly nonstationary agent dynamics. To extract the dynamic structure of the model, we use a sparse optimization technique, and the resulting model will be used in combination with control barrier certificates which constrain policies (feedback controllers) in order to maintain safety, which refers to avoiding certain regions of the state space. Under certain conditions, recovery of safety in the sense of Lyapunov stability after violations of safety due to the nonstationarity is guaranteed. In addition, we reformulate action-value function approximation to make any kernel-based nonlinear function estimation method applicable to our adaptive learning framework. Lastly, solutions to the barrier-certified policy optimization are guaranteed to be globally optimal, ensuring greedy policy updates under mild conditions. The resulting framework is validated via simulations of a quadrotor, which has been used in the safe learning literature under {\em stationarity} assumption, and then tested on a real robot called {\em brushbot}, whose dynamics are unknown, highly complex, and most probably nonstationary.

Information Constraints on Auto-Encoding Variational Bayes. (arXiv:1805.08672v2 [cs.LG] UPDATED)

Authors: Romain Lopez, Jeffrey Regier, Michael I. Jordan, Nir Yosef

Parameterizing the approximate posterior of a generative model with neural networks has become a common theme in recent machine learning research. While providing appealing flexibility, this approach makes it difficult to impose or assess structural constraints such as conditional independence. We propose a framework for learning representations that relies on Auto-Encoding Variational Bayes and whose search space is constrained via kernel-based measures of independence. In particular, our method employs the $d$-variable Hilbert-Schmidt Independence Criterion (dHSIC) to enforce independence between the latent representations and arbitrary nuisance factors. We show how to apply this method to a range of problems, including the problems of learning invariant representations and the learning of interpretable representations. We also present a full-fledged application to single-cell RNA sequencing (scRNA-seq). In this setting the biological signal is mixed in complex ways with sequencing errors and sampling effects. We show that our method outperforms the state-of-the-art in this domain.

Canonical Outs New Linux Kernel Live Patch for Ubuntu 18.04 LTS and 16.04 LTS
Coming hot on the heels of the latest Linux kernel security update released by Canonical on Tuesday, the new Linux kernel live patch security update fixes a total of five security vulnerabilities, which are documented as CVE-2018-11506, CVE-2018-11412, CVE-2018-13406, CVE-2018-13405, and CVE-2018-12233. These include a stack-based buffer overflow (CVE-2018-11506) discovered by Piotr Gabriel Kosinski and
GNU Nano 3.0 claims to read files 70% better with improved ASCII text handling
The Linux landscape is undergoing changes and developments constantly. Fresh distro releases, updates, kernels and apps keep appearing continuously. This week also brought several releases, including a significant new version of the open-source text editor known as Nano 3.0, code-named “Water Flowing Underground”. GNU Nano is one of the most famous
LFS 8.3 boot error
Hei. I just compiled linux kernel and added wpa_supplicant-2.6 kernel options and after that my LFS does not boot anymore. And I cannot find any kernel messages in lfs chroot in host distro. dmesg and...
Setting up IP Passthrough on Linux
Hello, I am running an embedded linux kernel version 3.14. My system has two network interfaces, a 4G LTE modem (ppp0) and an ethernet port (eth0). I would like to setup IP Passthrough on my...
Dracut Shell after kernel upgrade
Story goes: I have a set of servers I built, and then a set of servers that were here before me. Same release level of CentOS 7, same BIOS, same config for the most part. When I updated the...
AMD Picasso hybrid processors mentioned in Linux drivers
Once again, AMD's open-source driver for Linux-kernel-based operating systems serves as a source of information about future products. In the latest patches for the Linux kernel driver, the company... Read more »
Kernel panic on EC2 launch from AMI
Download Linux Kernel Development (Developer's Library) - Robert Love [Full Download]

Download here: Linux Kernel Development (Developer's Library) - Robert Love [Full Download]. Read online: Linux Kernel Development by Robert Love (paperback).
Scientists say 25 years left to fight climate change

You can think of global warming kind of like popping a bag of popcorn in the microwave.

Anthropogenic, or human-caused, warming has been stoked by increasing amounts of heat-trapping pollution since the start of the industrial age more than 200 years ago. But that first hundred years or so was kind of like the first minute for that popcorn — no real sign of much happening.

But then you get to that second minute, and the kernels really start doing their thing. And you can think of all those individual pops as extreme weather events — superstorms, extreme downpours, high-tide flooding, droughts, melting glaciers, ferocious wildfires. They’re like the signals that the climate is changing.

And in popcorn terms, “we are in that second minute,” says Inez Fung, an atmospheric scientist at UC Berkeley — in the throes of a problem we can now see unfolding all around us.

"Thirty years ago we predicted it in the models, and now I'm experiencing it,” Fung says. “You see the fires in the western US and British Columbia. And then at the same time, we’ve got fires, it rained three feet in Hilo, Hawaii, from [a] hurricane — that is a new record at the same time that we have droughts and fires, over 300 people died in India from floods. We are not prepared. "

Across the US the average temperature has risen almost two degrees Fahrenheit since the start of the 20th century. And that’s only the beginning, says Bill Collins, who directs climate and ecological science at Lawrence Berkeley National Lab.

"We released enough carbon dioxide to continue warming the climate for several centuries to come," Collins says.

And he says that means a certain amount of future warming is already "baked in, if you will."

In other words, going back to that popcorn metaphor, even if you hit the stop button on the oven, some of those kernels will keep popping.

"If we were to stop emissions entirely of all greenhouse gases right this minute," Collins says, "we'd see roughly another half a degree centigrade by the end of the 21st century."

That’s almost a full degree Fahrenheit already in the pipeline. So even if we shut down all emissions — which is not happening — we might still get to the threshold of two degrees Celsius, or 3.6 degrees Fahrenheit, warming from pre-industrial levels, at which point many scientists say the worst effects of climate change would kick in.

"We're seeing years now that basically blow the roof off of records back to the late 19th century," notes Collins — and then a remarkable thought occurs to him:

"None of the students in my classes have grown up in a normal climate. None of them."

On the flipside, if you’re over, say 30, and can actually recall “normal," well, that’s over.

"I have to say that all the projections that were made 30 years ago are still valid," says Fung. "The only thing we had not anticipated ... is that the CO2 increases much faster than we ever thought that it would."

Despite the pledges made in Paris by nearly every nation in the world (the US under the Trump administration is alone among signatories in backing out of the climate accord), emissions are still rising. And even those historic commitments — if they’re all kept — won’t be enough to turn things around.

"No, we're already beyond that," says Fung. "The commitments, I think, are a very good start, but they're just not adequate."

All this grim talk might lead one to ask what point there is in trying to reverse the climate train.

But recently refined climate models suggest that aggressively cutting emissions could at least blunt the impact of continued warming. It could, for example, reduce periods of extreme heat in California’s capital Sacramento from two weeks a year to as little as two days. The snowpack in the state’s Sierra Nevada mountains might shrink by “just” 20 percent, rather than 75 percent.

That’s the optimistic scenario.

The Global Climate Action Summit being held this week in San Francisco will pull together mayors, state and provincial governors, scientists and corporate leaders from around the world and the US to try to keep momentum going with what are known as "subnational" actions to reduce greenhouse gas emissions — things done at the local, state and regional level.

They'll be joined by major players such as California governor Jerry Brown, who organized the conference and has helped position the state as a global leader in the fight to stem climate change; former Vice President Al Gore; and former Secretary of State John Kerry, who signed the Paris accord on behalf of the US with his tiny granddaughter perched on his lap.

One of the themes attendees will discuss is "key building blocks required to peak global emissions by 2020," a goal that seems wildly optimistic given current emissions trajectories, and with barely more than two years to go.

"First thing we have to do as a global community is reverse course rather sharply," says Collins. "We think it is technically feasible."

Technically feasible, perhaps, but not easy. California, for instance, has the most aggressive efforts to cut greenhouse gases in the US and overall, it’s working — total emissions are down 13 percent since 2004. Still, climate emissions from cars and trucks are on the rise.

"Our cars are literally our time machines," Collins says. And they’re taking us backward.

"They're taking the atmosphere to a chemical state that it has not been in for millions of years," he says. "Currently, we have as much carbon dioxide in the Earth's atmosphere as we did five million years ago."

And that was a very different world, long before humans ever showed up.

In the space of a little over 230 years since the start of industrialization, Collins says "our steam engines, our factories, our cars … they've taken us back five million years."

And Collins says we have about 25 years — roughly one generation — to reverse course.

Collins and Fung both have their glimmers of optimism that technology and the boom in solar, wind and other forms of clean energy could quickly reduce climate emissions. Fung also points to the young college students passing by us on the Berkeley campus as her best hope.

"I am optimistic about the young people,” she says. “I'm optimistic that they are … very proactive about the future."

But she and Collins agree that what’s running out is time.

Windows kernel device driver development
We are seeking an engineer familiar with device driver development for Windows, that can take one of the example drivers of Microsoft (available on web) and add connectivity and logging capabilities to it. Approx. time is 20-40h, a very short project. Work from home or at our offices at your convenience. Deliverables are source code, and compiled and operational binaries for x86 and x64 architectures.
Windows driver development
Estimated on the basis of minispy/minifs. Data flow: Kernel -> User -> "Our Sw". Memory targets: 1M-1M-30M. The driver needs to capture all file events (read, write, modify, delete, etc.) and memory-device events such as connect or disconnect. The user level is expected to respond to "Our Sw" requests by sending the next event it captured, in order of capture, and to clear its list when told to do so. The driver will accumulate file changes across system power-downs, until requested.
