
Pet trade unleashes exotic predators throughout Florida
An array of exotic reptiles introduced to the wilds of Florida is posing a growing threat to the state’s bird life, according to a recent article on Audubon.org. It’s not just pet pythons set loose in the Everglades. “Farther north,” Audubon writes, “Nile monitors — the largest lizard in Africa— have been terrorizing a population of burrowing owls in the city of Cape Coral. And on the outskirts of Florida City, just outside Everglades National Park, egg-eating Argentine tegus could soon raid the nesting grounds of one of the last remaining populations of the endangered Cape Sable seaside sparrow.” The common denominator is the pet trade, Audubon reports, “but while most people acknowledge that’s a leaky pipeline, few agree on whether and how to plug it.” Read the full story here.
I would like to hire a Face Recognition Expert - 12/09/2018 18:00 EDT
I run a small photo-service business. We sell photos at local events (sports, dancing, etc.). At the moment I use Jalbum to create a static gallery, a laptop as a server, and some PCs as clients. People... (Budget: €30 - €250 EUR, Jobs: Face Recognition, Javascript, PHP, Python, Web Development)
Python Scrapy expert wanted, ongoing
Hello, I need an expert who can scrape 100K or more records in a day. I have many more scraping jobs, so if you are an expert coder who knows how to scrape large amounts of data in a short time, then this job is for you... (Budget: $10 - $30 USD, Jobs: Data Mining, Python, Software Architecture, VBScript, Web Scraping)
Diagnosing Heart Diseases with Deep Neural Networks

The Second National Data Science Bowl, a data science competition where the goal was to automatically determine cardiac volumes from MRI scans, has just ended. We participated with a team of 4 members from Ghent University and finished 2nd!

The team kunsthart ("artificial heart" in English) consisted of three PhD students (Ira Korshunova, Jeroen Burms, and Jonas Degrave) and professor Joni Dambre. It was also a follow-up to last year's team ≋ Deep Sea ≋, which finished in first place in the First National Data Science Bowl.

Overview

This blog post is going to be long, so here is an overview of the different sections.

Introduction

The problem

The goal of this year's Data Science Bowl was to estimate minimum (end-systolic) and maximum (end-diastolic) volumes of the left ventricle from a set of MRI images taken over one heartbeat. These volumes are used by practitioners to compute the ejection fraction: the fraction of outbound blood pumped from the heart with each heartbeat. This measurement can predict a wide range of cardiac problems. For a skilled cardiologist, analysis of the MRI scans can take up to 20 minutes; automating this process is therefore obviously useful.

Unlike the previous Data Science Bowl, which had a very clean and voluminous data set, this year's competition required a lot more focus on dealing with inconsistencies in the way the very limited number of data points were gathered. As a result, most of our efforts went to trying out different ways to preprocess and combine the different data sources.

The data

The dataset consisted of over a thousand patients. For each patient, we were given a number of 30-frame MRI videos in the DICOM format, showing the heart during a single cardiac cycle (i.e. a single heartbeat). These videos were taken in different planes, including multiple short-axis views (SAX), a 2-chamber view (2Ch), and a 4-chamber view (4Ch). The SAX views, whose planes are perpendicular to the long axis of the left ventricle, form a series of slices that (ideally) cover the entire heart. The number of SAX slices ranged from 1 to 23. Typically, the region of interest (ROI) is only a small part of the entire image. Below you can find a few of the SAX slices and the 2Ch and 4Ch views from one of the patients. Red circles on the SAX images indicate the ROI's center (later we will explain how to find it); for the 2Ch and 4Ch views they specify the locations of the SAX slices projected onto the corresponding view.

[Figure: example SAX slices (sax_5, sax_9, sax_10, sax_11, sax_12, sax_15) and the corresponding 2Ch and 4Ch views for one patient]

The DICOM files also contained a bunch of metadata. Some of the metadata fields, like PixelSpacing and ImageOrientation, were absolutely invaluable to us. The metadata also specified the patient's age and sex.

For each patient in the train set, two labels were provided: the systolic volume and the diastolic volume. From what we gathered (link), these were obtained by cardiologists by manually performing a segmentation on the SAX slices, and feeding these segmentations to a program that computes the minimal and maximal heart chamber volumes. The cardiologists didn’t use the 2Ch or 4Ch images to estimate the volumes, but for us they proved to be very useful.

Combining these multiple data sources can be difficult; however, for us dealing with inconsistencies in the data was even more challenging. Some examples: the 4Ch slice not being provided for some patients, one patient with fewer than 30 frames per MRI video, a couple of patients with only a handful of SAX slices, and patients with SAX slices taken in weird locations and orientations.

The evaluation

Given a patient’s data, we were asked to output a cumulative distribution function over the volume, ranging from 0 to 599 mL, for both systole and diastole. The models were scored by a Continuous Ranked Probability Score (CRPS) error metric, which computes the average squared distance between the predicted CDF and a Heaviside step function representing the real volume.
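To make the metric concrete, here is a minimal sketch of how CRPS can be computed for a single prediction, assuming the prediction is given as a 600-value CDF over 0-599 mL; the Gaussian-shaped example prediction is purely illustrative.

```python
import numpy as np
from scipy.stats import norm

def crps(predicted_cdf, true_volume):
    """Continuous Ranked Probability Score for one prediction.

    predicted_cdf: array of 600 values, P(volume <= v) for v = 0..599 mL.
    true_volume: the reference volume in mL.
    The target is a Heaviside step function that is 0 below the true volume
    and 1 at or above it; the score is the mean squared difference.
    """
    volumes = np.arange(600)
    heaviside = (volumes >= true_volume).astype(float)
    return np.mean((predicted_cdf - heaviside) ** 2)

# Illustrative example: a Gaussian CDF centred near the true volume
prediction = norm.cdf(np.arange(600), loc=150, scale=15)
print(crps(prediction, true_volume=155))
```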

An additional interesting novelty of this competition was the two-stage process. In the first stage, we were given a training set of 500 patients and a public test set of 200 patients. In the final week we were required to submit our model, and afterwards the organizers released the test data of 440 patients and the labels for the 200 patients from the public test set. We think the goal was to compensate for the small dataset and prevent people from optimizing against the test set through visual inspection of every part of their algorithm. Hand-labeling in the first stage was allowed on the training dataset only; for the second stage it was also allowed for the 200 validation patients.

The solution: traditional image processing, convnets, and dealing with outliers

In our solution, we combined traditional image processing approaches, which find the region of interest (ROI) in each slice, with convolutional neural networks, which perform the mapping from the extracted image patches to the predicted volumes. Given the very limited number of training samples, we tried to combat overfitting by restricting our models to combine the different data sources in predefined ways, as opposed to having them learn how to do the aggregation. Unlike many other contestants, we performed no hand-labelling.

Pre-processing and data augmentation

The provided images have varying sizes and resolutions, and show not only the heart but the entire torso of the patient. Our preprocessing pipeline made the images ready to be fed to a convolutional network by going through the following steps:

  • applying a zoom factor such that all images have the same resolution in millimeters
  • finding the region of interest and extracting a patch centered around it
  • data augmentation
  • contrast normalization

To find the correct zoom factor, we made use of the PixelSpacing metadata field, which specifies the image resolution. Below we explain our approach to ROI detection and data augmentation.
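As an illustration, a rescaling step along these lines could look as follows; this is a sketch assuming a recent scikit-image, and the 1 mm-per-pixel target resolution is a made-up value, not necessarily the one we used.

```python
from skimage.transform import rescale

def normalize_resolution(image, pixel_spacing, target_mm_per_pixel=1.0):
    """Rescale one frame so that a pixel corresponds to a fixed size in mm.

    image: 2D array with one frame of a slice.
    pixel_spacing: (row_mm, col_mm), taken from the DICOM PixelSpacing field.
    """
    factors = (pixel_spacing[0] / target_mm_per_pixel,
               pixel_spacing[1] / target_mm_per_pixel)
    # preserve_range keeps the original intensity scale instead of [0, 1]
    return rescale(image, factors, preserve_range=True, anti_aliasing=True)

# usage sketch (reading the DICOM file with pydicom):
# ds = pydicom.dcmread("slice.dcm")
# frame = normalize_resolution(ds.pixel_array, [float(s) for s in ds.PixelSpacing])
```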

Detecting the Region Of Interest through image segmentation techniques

We used classical computer vision techniques to find the left ventricle in the SAX slices. For each patient, the center and width of the ROI were determined by combining the information of all the SAX slices provided. The figure below shows an example of the result.

ROI extraction steps

First, as suggested in the Fourier-based tutorial, we exploit the fact that each slice sequence captures one heartbeat and use Fourier analysis to extract an image that captures the maximal activity at the corresponding heartbeat frequency (same figure, second image).
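A minimal numpy sketch of this idea: because the 30 frames span exactly one cardiac cycle, the magnitude of the first temporal Fourier harmonic highlights the pixels that move at the heartbeat frequency.

```python
import numpy as np

def fourier_activity_image(slice_sequence):
    """Highlight structures moving at the heartbeat frequency.

    slice_sequence: array of shape (30, H, W), the frames of one SAX slice.
    """
    spectrum = np.fft.fft(slice_sequence, axis=0)
    # harmonic 0 is just the temporal mean; harmonic 1 corresponds to one
    # full cycle over the sequence, i.e. the heartbeat frequency
    return np.abs(spectrum[1])
```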

From these Fourier images, we then extracted the center of the left ventricle by combining the Hough circle transform with a custom kernel-based majority voting approach across all SAX slices. First, for each Fourier image (resulting from a single SAX slice), the highest scoring Hough circles for a range of radii were found, and from all of those, the highest scoring ones were retained. The number of circles retained and the range of radii are metaparameters that severely affect the robustness of the detected ROI and were optimised manually. The third image in the figure shows an example of the best circles for one slice.

Finally, a ‘likelihood surface’ (rightmost image in figure above) was obtained by combining the centers and scores of the selected circles for all slices. Each circle center was used as the center for a Gaussian kernel, which was scaled with the circle score, and all these kernels were added. The maximum across this surface was selected as the center of the ROI. The width and height of the bounding box of all circles with centers within a maximal distance (another hyperparameter) of the ROI center were used as bounds for the ROI or to create an ellipsoidal mask as shown in the figure.
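A sketch of the voting step described above; the kernel width sigma stands in for the hyperparameter mentioned in the text, and its value here is arbitrary.

```python
import numpy as np

def roi_center_from_circles(circles, image_shape, sigma=10.0):
    """Combine Hough circles from all SAX slices into one ROI centre.

    circles: iterable of (row, col, score) tuples for the retained circles.
    Each circle adds a Gaussian kernel at its centre, scaled by its Hough
    score; the argmax of the summed 'likelihood surface' is the ROI centre.
    """
    rows, cols = np.mgrid[0:image_shape[0], 0:image_shape[1]]
    surface = np.zeros(image_shape)
    for r, c, score in circles:
        surface += score * np.exp(-((rows - r) ** 2 + (cols - c) ** 2)
                                  / (2.0 * sigma ** 2))
    center = np.unravel_index(np.argmax(surface), surface.shape)
    return center, surface
```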

Given these ROIs in the SAX slices, we were able to find the ROIs in the 2Ch and 4Ch slices by projecting the SAX ROI centers onto the 2Ch and 4Ch planes.

Data augmentation

As always when using convnets on a problem with few training examples, we used tons of data augmentation. Some special precautions were needed, since we had to preserve the surface area. In terms of affine transformations, this means that only skewing, rotation and translation were allowed. We also added zooming, but we had to correct our volume labels when doing so! This helped to make the distribution of labels more diverse.

Another augmentation here came in the form of shifting the images over the time axis. While systole was often found in the beginning of a sequence, this was not always the case. Augmenting this, by rolling the image tensor over the time axis, made the resulting model more robust against this noise in the dataset, while providing even more augmentation of our data.
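This augmentation is a one-liner in numpy; the sketch below assumes the clip is stored with time as the first axis.

```python
import numpy as np

def roll_over_time(clip, rng=np.random.default_rng()):
    """Shift a 30-frame clip along the time axis by a random offset.

    clip: array of shape (30, H, W). Since the sequence covers one full
    cardiac cycle, rolling only changes where in the cycle it starts.
    """
    shift = int(rng.integers(0, clip.shape[0]))
    return np.roll(clip, shift, axis=0)
```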

Data augmentation was applied during the training phase to increase the number of training examples. We also applied the augmentations during the testing phase, and averaged predictions across the augmented versions of the same data sample.

Network architectures

We used convolutional neural networks to learn a mapping from the extracted image patches to systolic and diastolic volumes. During the competition, we played around a lot with both minor and major architectural changes. Our base architecture for most of our models was based on VGG-16.

As we already mentioned, we trained different models which can deal with different kinds of patients. There are roughly four different kinds of models we trained: single slice models, patient models, 2Ch models and 4Ch models.

Single slice models

Single slice models are models that take a single SAX slice as an input, and try to predict the systolic and diastolic volumes directly from it. The 30 frames were fed to the network as 30 different input channels. The systolic and diastolic networks shared the convolutional layers, but the dense layers were separated. The output of the network could be either a 600-way softmax (followed by a cumulative sum), or the mean and standard deviation of a Gaussian (followed by a layer computing the cdf of the Gaussian).
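For the Gaussian variant, the final layer only has to turn a predicted mean and standard deviation into the required 600-value CDF. A minimal sketch of that mapping, outside of any particular deep learning framework:

```python
import numpy as np
from scipy.special import erf

def gaussian_cdf_output(mu, sigma):
    """Turn a predicted (mu, sigma) in mL into a 600-value CDF.

    Returns P(volume <= v) for v = 0..599 mL, clipped to stay in [0, 1].
    """
    v = np.arange(600)
    cdf = 0.5 * (1.0 + erf((v - mu) / (np.sqrt(2.0) * sigma)))
    return np.clip(cdf, 0.0, 1.0)
```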

Although these models obviously have too little information to make a decent volume estimation, they benefitted hugely from test-time augmentation (TTA). During TTA, the model gets slices with different augmentations, and the outputs are averaged across augmentations and slices for each patient. Although this way of aggregating over SAX slices is suboptimal, it proved to be very robust to the relative positioning of the SAX slices, and is as such applicable to all patients.

Our single best single slice model achieved a local validation score of 0.0157 (after TTA), which was a reliable estimate for the public leaderboard score for these models. The approximate architecture of the slice models is shown on the following figure.

2Ch and 4Ch models

These models have a much more global view on the left ventricle of the heart than single SAX slice models. The 2Ch models also have the advantage of being applicable to every patient; not every patient had a 4Ch slice. We used the same VGG-inspired architecture for these models. Individually, they achieved a validation score (0.0156) similar to what was achieved by averaging over multiple SAX slices. By ensembling only single slice, 2Ch and 4Ch models, we were able to achieve a score of 0.0131 on the public leaderboard.

Patient models

As opposed to single slice models, patient models try to make predictions based on the entire stack of (up to 25) SAX slices. In our first approaches to these models, we tried to process each slice separately using a VGG-like single slice network, followed by feeding the results to an overarching RNN in an ordered fashion. However, these models tended to overfit badly. Our solution to this problem consists of a clever way to merge predictions from multiple slices. Instead of having the network learn how to compute the volume based on the results of the individual slices, we designed a layer which combines the areas of consecutive cross-sections of the heart using a truncated cone approximation.

Basically, the slice models have to estimate the area A_i (and the standard deviation thereof) of the cross-section of the heart in a given slice i. For each pair of consecutive slices i and i+1, we estimate the volume of the heart between them as the truncated cone V_i = (d_i / 3) * (A_i + sqrt(A_i * A_{i+1}) + A_{i+1}), where d_i is the distance between the slices. The total volume is then given by V = sum_i V_i.

Ordering the SAX slices and finding the distances between them was achieved by looking at the SliceLocation metadata field, but this field was not very reliable for finding the distance between slices, and neither was SliceThickness. Instead, we looked for the two slices that were furthest apart, drew a line between them, and projected every other slice onto this line. This way, we estimated the distances between slices ourselves.
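Putting the two previous paragraphs together, a sketch of the volume computation could look like this; the per-slice areas and positions are assumed to come from the slice models and the projection described above.

```python
import numpy as np

def heart_volume(areas, positions):
    """Combine per-slice cross-section areas into a single volume estimate.

    areas: estimated left-ventricle areas in mm^2, one per SAX slice.
    positions: slice positions in mm along the long axis (e.g. obtained by
    projecting each slice onto the line between the two furthest slices).
    Consecutive slices are treated as the parallel faces of a truncated cone.
    """
    order = np.argsort(positions)
    a = np.asarray(areas, dtype=float)[order]
    p = np.asarray(positions, dtype=float)[order]
    d = np.diff(p)                       # distances between consecutive slices
    frustums = d / 3.0 * (a[:-1] + np.sqrt(a[:-1] * a[1:]) + a[1:])
    return frustums.sum() / 1000.0       # mm^3 -> mL
```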

Our best single model achieved a local validation score of 0.0105 using this approach. This was no longer a good leaderboard estimation, since our local validation set contained relatively few outliers compared to the public leaderboard in the first round. The model had the following architecture:

Layer type              | Size                    | Output shape
Input layer             |                         | (8, 25, 30, 64, 64)*
Convolution             | 128 filters of 3x3      | (8, 25, 128, 64, 64)
Convolution             | 128 filters of 3x3      | (8, 25, 128, 64, 64)
Max pooling             |                         | (8, 25, 128, 32, 32)
Convolution             | 128 filters of 3x3      | (8, 25, 128, 32, 32)
Convolution             | 128 filters of 3x3      | (8, 25, 128, 32, 32)
Max pooling             |                         | (8, 25, 128, 16, 16)
Convolution             | 256 filters of 3x3      | (8, 25, 256, 16, 16)
Convolution             | 256 filters of 3x3      | (8, 25, 256, 16, 16)
Convolution             | 256 filters of 3x3      | (8, 25, 256, 16, 16)
Max pooling             |                         | (8, 25, 256, 8, 8)
Convolution             | 512 filters of 3x3      | (8, 25, 512, 8, 8)
Convolution             | 512 filters of 3x3      | (8, 25, 512, 8, 8)
Convolution             | 512 filters of 3x3      | (8, 25, 512, 8, 8)
Max pooling             |                         | (8, 25, 512, 4, 4)
Convolution             | 512 filters of 3x3      | (8, 25, 512, 4, 4)
Convolution             | 512 filters of 3x3      | (8, 25, 512, 4, 4)
Convolution             | 512 filters of 3x3      | (8, 25, 512, 4, 4)
Max pooling             |                         | (8, 25, 512, 2, 2)
Fully connected (S/D)   | 1024 units              | (8, 25, 1024)
Fully connected (S/D)   | 1024 units              | (8, 25, 1024)
Fully connected (S/D)   | 2 units (mu and sigma)  | (8, 25, 2)
Volume estimation (S/D) |                         | (8, 2)
Gaussian CDF (S/D)      |                         | (8, 600)

* The first dimension is the batch size, i.e. the number of patients, the second dimension is the number of slices. If a patient had fewer slices, we padded the input and omitted the extra slices in the volume estimation.

Oftentimes, we did not train patient models from scratch. We found that initializing patient models with single slice models helps against overfitting, and severely reduces training time of the patient model.

The architecture we described above was one of the best for us. To diversify our models, some of the good things we tried include:

  • processing each frame separately, and taking the minimum and maximum at some point in the network to compute systole and diastole
  • sharing some of the dense layers between the systole and diastole networks as well
  • using discs to approximate the volume, instead of truncated cones
  • cyclic rolling layers
  • leaky ReLUs
  • maxout units

One downside of the patient model approach was that these models assume that SAX slices nicely range from one end of the heart to the other. This was trivially not true for patients with very few (< 5) slices, but it was harder to detect automatically for some other outlier cases, as in the figure below, where something is wrong with the images or the ROI algorithm fails.

[Figure: an outlier patient's SAX slices (sax_12, sax_15, sax_17, sax_36, sax_37, sax_41) and the corresponding 2Ch and 4Ch views]

Training and ensembling

Error function. At the start of the competition, we experimented with various error functions, but we found optimising CRPS directly to work best.

Training algorithm. To train the parameters of our models, we used the Adam update rule (Kingma and Ba).

Initialization. We initialised all filters and dense layers orthogonally (Saxe et al.). Biases were initialized to small positive values to have more gradients flowing in the lower layers at the beginning of the optimization. At the Gaussian output layers, we initialized the biases for mu and sigma such that the initial predictions of the untrained network would fall in a sensible range.
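For reference, a small numpy sketch of the orthogonal initialization idea for a dense weight matrix; in practice frameworks such as Lasagne ship an equivalent initializer, so this is only illustrative.

```python
import numpy as np

def orthogonal_init(n_in, n_out, rng=np.random.default_rng(0)):
    """Orthogonal initialization (Saxe et al.) for a dense weight matrix."""
    a = rng.standard_normal((n_in, n_out))
    # The QR decomposition of a random Gaussian matrix yields an orthonormal basis.
    q, _ = np.linalg.qr(a if n_in >= n_out else a.T)
    return q if n_in >= n_out else q.T
```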

Regularization. Since we had a low number of patients, we needed considerable regularization to prevent our models from overfitting. Our main approach was to augment the data and to add a considerable amount of dropout.

Validation

Since the training set was already quite small, we kept the validation set small as well (83 patients). Despite this, our validation score remained pretty close to the leaderboard score. In cases where it didn't, it helped us identify issues in our models, namely problematic cases in the test set which were not represented in our validation set. We noticed, for instance, that quite a few of our patient models had problems with patients with too few SAX slices (< 5).

Selectively train and predict

By looking more closely at the validation scores, we observed that most of the accumulated error was obtained by wrongly predicting only a couple of such outlier cases. At some point, being able to handle only a handful of these meant the difference between a leaderboard score of 0.0148 and 0.0132!

To mitigate such issues, we set up our framework such that each individual model could choose not to train on or predict for a certain patient. For instance, models working on patients' SAX slices could choose not to predict for patients with too few SAX slices, and models which use the 4Ch slice would not predict for patients who don't have this slice. We extended this idea further by developing expert models, which only trained on and predicted for patients with either a small or a big heart (as determined by the ROI detection step). Further down the pipeline, our ensembling scripts would then take these non-predictions into account.

Ensembling and dealing with outliers

We ended up creating about 250 models throughout the competition. However, we knew that some of these models were not very robust to certain outliers or patients whose ROI we could not accurately detect. We came up with two different ensembling strategies that would deal with these kind of issues.

Our first ensembling technique followed the following steps:

  1. For each patient, we select the best way to average over the test time augmentations. Slice models often preferred a geometric averaging of distributions, whereas in general arithmetic averaging worked better for patient models.
  2. We average over the models by calculating each prediction’s KL-divergence from the average distribution, and the cross entropy of each single sample of the distribution. This means that models which are further away from the average distribution get more weight (since they are more certain). It also means samples of the distribution closer to the median-value of 0.5 get more weight. Each model also receives a model-specific weight, which is determined by optimizing these weights over the validation set.
  3. Since not all models predict all patients, it is possible for a model in the ensemble to not predict a certain patient. In this case, a new ensemble without these models is optimized, especially for this single patient. The method to do this is described in step 2.
  4. This ensemble is then used on every patient in the test set. However, when a certain model's average prediction disagrees too much with the average prediction of all models, the model is thrown out of the ensemble, and a new ensemble is optimized for this patient, as described in step 2. This meant that about 75% of all patients received a new, 'personalized' ensemble.

Our second way of ensembling involves comparing an ensemble that is suboptimal, but robust to outliers, to an ensemble that is not robust to them. This approach is especially interesting, since it does not need a validation set to predict the test patients. It follows the following steps:

  1. Again, for each patient, we select the best way to average over the test time augmentations.
  2. We combine the models by using a weighted average of the predictions, with the weights summing to one. These weights are determined by optimising them on the validation set. In case a model provides no prediction for a certain patient, that model is dropped for that patient and the weights of the other models are rescaled such that they again sum to one. This ensemble is not robust to outliers, since it contains patient models.
  3. We combine all 2Ch, 4Ch and slice models in a similar fashion. This ensemble is robust to outliers, but only contains less accurate models.
  4. We detect outliers by finding the patients where the two ensembles disagree the most. We measure disagreement using CRPS. If the CRPS exceeds a certain threshold for a patient, we assume it to be an outlier. We chose this threshold to be 0.02.
  5. We retrain the weights for the first ensemble, but omit the outliers from the validation set. We choose this ensemble to generate predictions for most of the patients, but choose the robust ensemble for the outliers.

Following this approach, we detected three outliers in the test set during phase one of the competition. Closer inspection revealed that for all of them either our ROI detection failed, or the SAX slices were not nicely distributed across the heart. Both ways of ensembling achieved similar scores on the public leaderboard (0.0110).

Second round submissions

For the second round of the competition, we were allowed to retrain our models on the new labels (+200 patients). We were also allowed to plan two submissions. Of course, it was impossible to retrain all of our models during this single week. For this reason, we chose to retrain only our 44 best models, according to our ensembling scripts.

For our first submission, we split off a new validation set. The resulting models were combined using our first ensembling strategy.

For our second submission, we trained our models on the entire training set (i.e. there was no validation split). We ensembled them using the second ensembling method. Since we had no validation set to optimise the weights of the ensemble, we computed the weights by training an ensemble on the models we had trained with a validation split, and transferred them over.

Software and hardware

We used Lasagne, Python, Numpy and Theano to implement our solution, in combination with the cuDNN library. We also used PyCUDA for a few custom kernels. We made use of scikit-image for pre-processing and augmentation.

We trained our models on the NVIDIA GPUs that we have in the lab, which include GTX TITAN X, GTX 980, GTX 680 and Tesla K40 cards. We would like to thank Frederick Godin and Elias Vansteenkiste for lending us a few extra GPUs in the last week of the competition.

Conclusion

In this competition, we tried out different ways to preprocess data and combine information from different data sources, and we learned a lot in this respect. However, we feel that there is still room for improvement. For example, we observed that most of our error still comes from a select group of patients, including the ones for which our ROI extraction fails. In hindsight, hand-labeling the training data and training a network to do the ROI extraction would have been a better approach, but we wanted to sidestep this kind of manual effort as much as possible. In the end, labeling the data would probably have been less time intensive.

UPDATE (March 23): the code is now available on GitHub: github.com/317070/kaggle-heart


Predicting epileptic seizures

The American Epilepsy Society Seizure Prediction Challenge ended a few days ago, and I finished 10th out of 504 teams. In this post I will briefly outline the problem and my solution.

The problem

Epilepsy is one of the most commonly diagnosed neurological disorders. It is characterized by the occurrence of spontaneous seizures, and to make matters even worse, nearly 30% of patients cannot control their seizures with medication. In such cases, seizure forecasting systems are vitally important. Their purpose is to trigger a warning when the probability of a forthcoming seizure exceeds a predefined threshold, so patients have time to plan their activities.

The primary challenge in building a seizure prediction device is the ability to distinguish between pre-seizure and non-seizure states of brain activity measured with EEG. From here on, we will use the term preictal to denote a pre-seizure state and interictal for normal EEG data without any signs of seizure activity.

In our setting, the preictal state was defined as the hour before seizure onset, with a 5-minute horizon.

An example of a preictal EEG record (3 channels out of 16, picture from Kaggle)

For each 10-minute clip from a preictal or interictal one-hour sequence, we were asked to assign a probability that the given clip is preictal. In the training data, the timing of each clip was known. However, it wasn't given for the test clips, so we couldn't use the timing information when building a classifier.

The evaluation metric was the area under the ROC curve (AUC), calculated for the seven available subjects as one big group. This implied an additional challenge, since predictions from subject-specific models were in general not well calibrated. Training a single classifier for all subjects could a priori not work better, because brain activity is highly individual. The global AUC metric required additional robustness against the choice of the classification threshold, which is crucial in seizure forecasting systems.

The solution: convnets

The idea of using convnets for EEG was inspired by the research of Sander Dieleman. In his case, the audio signal was one-dimensional, unlike the EEG signal, which had 15-24 channels depending on the subject. Therefore, the information from different channels has to be combined in some way. I tried various convnet architectures; however, I had the most success when merging features from different channels in the very first layer.

The input features themselves were very simple and similar to those in Howbert et al., 2014: the EEG signal was partitioned into nonoverlapping 1-minute frames and transformed with the DFT. The resulting amplitude spectrum was averaged within 6 frequency bands: delta (0.1-4 Hz), theta (4-8 Hz), alpha (8-12 Hz), beta (12-30 Hz), low-gamma (30-70 Hz) and high-gamma (70-180 Hz). Thus, the dimension of a data clip was equal to (channels x frequency bands x time frames). Additionally, in some models I used the standard deviation of the signal computed in the same time windows as the DFT.
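A minimal sketch of this feature extraction; the band edges are taken from the text, while the windowing details (such as the exact handling of the clip length and sampling rate) are simplified assumptions.

```python
import numpy as np

# Frequency bands from the text, in Hz
BANDS = [(0.1, 4), (4, 8), (8, 12), (12, 30), (30, 70), (70, 180)]

def dft_band_features(clip, sampling_rate, frame_seconds=60):
    """Per-channel, per-band amplitude features for one EEG clip.

    clip: array of shape (n_channels, n_samples), e.g. a 10-minute recording.
    Returns an array of shape (n_channels, len(BANDS), n_frames).
    """
    frame_len = int(frame_seconds * sampling_rate)
    n_frames = clip.shape[1] // frame_len
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sampling_rate)
    features = np.empty((clip.shape[0], len(BANDS), n_frames))
    for t in range(n_frames):
        frame = clip[:, t * frame_len:(t + 1) * frame_len]
        amplitude = np.abs(np.fft.rfft(frame, axis=1))
        for b, (lo, hi) in enumerate(BANDS):
            mask = (freqs >= lo) & (freqs < hi)
            features[:, b, t] = amplitude[:, mask].mean(axis=1)
    return features
```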

An example of a convnet architecture

Its first layer (C1) performs convolution in the time dimension over all N channels and all 6 frequency bands. C1 has 16 feature maps. The second layer (C2) performs convolution with 32 filters. C2 is followed by a global temporal pooling layer GP3, which computes the following statistics over the 9 values in each feature map from C2: mean, maximum, minimum, variance, geometric mean, and norm. GP3 is fully connected with the 128 units of the F4 layer. The C1 and C2 layers were composed of ReLUs; a tanh activation was used in the hidden layer.
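A numpy sketch of the global temporal pooling layer; whether "norm" means the L2 norm here is my assumption.

```python
import numpy as np

def global_temporal_pooling(feature_maps, eps=1e-8):
    """Pool each feature map over time with a fixed set of statistics.

    feature_maps: array of shape (n_maps, n_timesteps), e.g. the C2 outputs
    of one clip. Returns the mean, max, min, variance, geometric mean and
    (assumed L2) norm of every map, concatenated into one feature vector.
    """
    stats = [
        feature_maps.mean(axis=1),
        feature_maps.max(axis=1),
        feature_maps.min(axis=1),
        feature_maps.var(axis=1),
        np.exp(np.log(np.abs(feature_maps) + eps).mean(axis=1)),  # geometric mean
        np.linalg.norm(feature_maps, axis=1),
    ]
    return np.concatenate(stats)
```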

I used Python, Theano, NumPy and SciPy to implement the solution. Since I didn’t have a GPU, I trained everything on CPU, which usually took 2-6 hours to train all individual models for 175K gradient updates with ADADELTA.

Model averaging

As the final model I used a geometric mean of the predictions (normalized to the 0-1 range for each subject) obtained from 11 convnets with different hyperparameters: the number of feature maps or hidden units in each layer, the strides in the convolutional layers, the amount of dropout and weight decay, etc. The models were also trained on different data; for example, I could take the DFT in 0.5- or 2-minute windows, or change the frequency band partition by dividing the large gamma bands into two. To augment the training set I took the overlaps between consecutive clips in the same one-hour sequence; regrettably, I didn't do this in real time.

Results

In the middle of the competition, the organizers decided to allow use of the test data to calibrate the predictions. I used the simplest form of calibration: unity-based normalization of per-subject predictions. This improved the AUC by about 0.01. The final ensemble, in which each of the 11 models was calibrated, obtained 0.81087 on the public and 0.78354 on the private dataset. Model averaging was essential to reduce the probability of an unpleasant surprise on the private LB. After the private scores became available, I found out that some models with relatively high public scores had unexpectedly low private scores and vice versa. A possible explanation is given in the next section.
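A sketch of the calibration and averaging steps described above, assuming each model produces one probability per clip for a given subject.

```python
import numpy as np

def normalize_per_subject(predictions, eps=1e-12):
    """Unity-based normalization: rescale one subject's predictions to [0, 1]."""
    p = np.asarray(predictions, dtype=float)
    return (p - p.min()) / (p.max() - p.min() + eps)

def geometric_mean_ensemble(per_model_predictions, eps=1e-12):
    """Geometric mean of the calibrated predictions of several models.

    per_model_predictions: list of arrays, one per model, each holding the
    predictions for all clips of one subject.
    """
    calibrated = np.stack([normalize_per_subject(p) for p in per_model_predictions])
    return np.exp(np.log(calibrated + eps).mean(axis=0))
```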

Pitfalls of the EEG

In this competition one could easily overfit to the public test set, which explains the large shake-up between the public and private leaderboard standings. Moreover, even a tricky cross-validation scheme could track the test score only with varying success, and simple validation wasn't even supposed to work. So what is the reason for this phenomenon?

The problem lies in the properties of the EEG data: clips which are close in time are more similar to one another than to distant clips, because brain activity constantly changes. This can be shown via t-SNE. If we take all preprocessed data clips (train and test) from one subject, transform them with PCA into vectors with 50 components, and take a few sequences, it would look like the plot below. Here we can see three clusters of six 10-minute clips (with some outliers), which correspond to different preictal EEG hours.

t-SNE visualization of the preictal clips from 3 one-hour sequences.

Obviously, if the classifier is trained on 5 clips from the green sequence, it is easier to make a correct prediction for the 6th green clip than for a yellow or red clip from another sequence. Sadly, in many papers I've seen on seizure prediction, this fact is often neglected. Many authors report cross-validation or validation results which are unlikely to be as optimistic at test time, when the device is actually implanted in someone's brain.

Conclusions

This competition outlined a lot of open questions in the field of seizure prediction. These problems are solvable, so epilepsy patients could hopefully get reliable seizure advisory systems in the near future.

The competition was a basis for my thesis, which can be found in my GitHub repository along with its code.


fix python script/linux error of decoding a h264 video stream
I have a computer vision script in Python that analyzes images from network cameras. After a few hours of running, the script starts outputting the following error: error while decoding MB 0 14, bytestream... (Budget: $10 - $30 USD, Jobs: Linux, Python, Software Architecture, Ubuntu)
Hands-On Unsupervised Learning with Python [Video]
Full Stack Developer / Développeur d'application par pile complète - Explorance - Montréal, QC
On the back end, you will develop applications, servers and databases that form the core structure, using C#, ASP.NET, Python and SQL...
From Explorance - Tue, 04 Sep 2018 20:56:02 GMT - View all Montréal, QC jobs
9/13/2018: NEWS: Unusual end of scale

Scrub python mad as a cut snake after being hit by a chainsaw. cairnspost.com.au
9/13/2018: NEWS: This python's injury at more unusual end of the scale

danaella.wivell@news.com.au THIS scrub python is as mad as a cut snake. An attempt to move the reptile from a Redlynch cubby house went horribly wrong when the snake was hit by a chainsaw. FNQ Exotic Haven co-owner Amanda Milligan was called to the...
WINE: GIVE BOOZE A CHANCE/BONZO DOG DOO-DAH BAND

I suspect the Bonzos didn't really translate that well across the pond to the USA, that second of two nations divided by a single language. Indeed, apart from a cult of ageing sexa- and septuagenarians, I've never been sure whether they actually meant so much over here, their one slab of chart action being produced by one Paul McCartney, often mistakenly thought to have also been written by him. But this glorious parody never fails to make me smile, even as I repeatedly play it to my bemused, and unamused, wife. Nonetheless, it seems fitting, in a fortnight of songs about wine, to give a thought to the downside. And this is probably best appreciated within the context that Vivian Stanshall, the singer and frontman of this anarchic ensemble, died arguably not indirectly from his giving booze one hell of a chance, in a fire on his houseboat, contributed to in no small part by his prodigious alcohol consumption. Details vary.


The Bonzo Dog Doo-Dah Band had no right to be a success, but, in a small way, a success they were. Formed in about 1962, the aim was to provide a dada-ist counterpoint to the then trad-jazz revival of its day, which involved the deadpan recreation of the often absurd and very English approach to dixieland. Take "Hunting Tigers out in Indiah", which might be considered one such example, their recreation of an original song dating from the '30s. By this stage they were in transition from an all brass and banjo line-up, beginning to introduce more conventional, for the '60s, instrumentation. Helped in no small part by a resident spot on the children's TV programme "Do Not Adjust Your Set", they gradually morphed into a slightly broader parodic approach to all styles of popular music. Should you pursue the clip, it also shows the nascent beginnings of "Monty Python's Flying Circus", with all but John Cleese present. I was a child of that time, and took to this nonsense with ease, able to effortlessly assimilate the Bonzos into my developing musical palette. (And no surprise that, as I grew older, I could become part of the peak audience for Python.)

Come 1967 and they had formally ditched most of the jazz, and many of the members who had played that style. A further boost came when Paul McCartney secured them a place in the Beatles' "Magical Mystery Tour". Here are some outtakes of their included song "Death Cab For Cutie", should you wonder where Ben Gibbard came up with the name. It was still hardly rock and roll in any conventional sense, they somehow still finding favour on the gig circuit and, especially, at festivals. They did manage one tour of the US, as support to the Kinks, themselves perhaps on a reverse journey from rock to vaudeville. It was a disaster. They lurched on for a few more tours and a few more albums, shedding members, yet retaining a hard core of Stanshall, later Rutles mastermind Neil Innes and tap-dancing drummer "Legs" Larry Smith, alongside saxophonists Rodney Slater and Roger Ruskin Spear, inventor of ever more Heath Robinson wind instruments. Acclaim did not, however, translate into sales.


After dissolution in 1970 they had a couple of short-lived reunions. Stanshall descended into ever more eccentric behaviours, ricocheting between drunken japes with Keith Moon and writing a body of works around the fictitious upper-class gentleman explorer Sir Henry Rawlinson: "Sir Henry at Rawlinson End", initially a spoken word project, later a series of books and a radio piece, even a film, with Trevor Howard, no less. He died in 1995. Innes, ahead of the Rutles, went on to be the resident musical go-to for the Monty Python team, often appearing in their live concerts. Between 2006 and 2008 a final official reunion took place, with Stanshall replaced by comedians/actors Phill Jupitus and Ade Edmondson, playing to nostalgia-hungry audiences of (very) ageing schoolboys.

I confess I always found the band to be a better idea than a reality. You probably had to be there, recorded material having dated dreadfully. But for that idea I am grateful. And, for a short time, back in the dim and distant, glad also that I was there. Surely that is worth raising a glass to.

Give booze a chance.



Python A-Z™: Python For Data Science With Real Exercises
MP4 | Video: AVC 1280x720 | Audio: AAC 44KHz 2ch | Duration: 11 Hours | Lec: 69 | 2.18 GB
Genre: eLearning | Language: English

Programming In Python For Data Analytics And Data Science. Learn Statistical Analysis, Data Mining And Visualization

There are lots of Python courses and lectures out there. However, Python has a very steep learning curve and students often get overwhelmed. This course is different!


Jazkarta Blog: Announcing collective.siteimprove

Screenshot of collective.siteimprove UI, expanded

As I reported back in May, at our last Jazkarta sprint Witek and Alec began work on a Plone add-on, collective.siteimprove, that provides integration with the Siteimprove content quality checking service. I’m pleased to announce that the add-on has now been thoroughly vetted by the folks at Siteimprove and the resulting version is available from Pypi!

What Is Siteimprove

Siteimprove is a respected service for maintaining and improving web content quality. Customers who sign up for the service get automated scans of their websites which check for content quality, accessibility compliance, SEO, data privacy, and performance. Site rollups and per page reports are available via email and from a customizable dashboard at siteimprove.com.

Siteimprove also provides an API which allows for the development of CMS plugins – integrations of the Siteimprove service within content management systems. This allows content editors to get immediate feedback on pages that they publish. This is great because it lets editors see problems while they are in the process of editing a page, instead of getting a report after the fact and needing to click through links to fix things.

Graphic explaining how Siteimprove works

 

Why Siteimprove for Plone

Plone, the premier Python-based open source content management system, is an enterprise scale CMS that is widely used by large organizations with large websites. These are just the types of organizations that can benefit from a tool like Siteimprove, which has a reputation for being an excellent service for maintaining and improving website content.

The Jazkarta team was delighted to be able to contribute to the Plone community by creating an add-on that integrates Siteimprove’s CMS plugin features into the Plone editing process. Now anyone with a Plone website can easily integrate with Siteimprove simply by installing an add-on – all the integration work has been done.

collective.siteimprove

After collective.siteimprove is installed on a Plone site, there will be a new control panel where Siteimprove customers can request and save a token that registers the domain with siteimprove.com. After that, authorized users will see an overlaid Siteimprove button on each page that shows the number of issues found.

Screenshot of the collective.siteimprove UI, collapsed

 

When clicked, the overlay expands to show a summary report of page errors and an overall score, as shown in the image at the top of this post. After an edit, users can click a button on the overlay to request that Siteimprove recheck the page. They can also follow a link to the full page report at siteimprove.com.

Plone+Siteimprove FTW

Now anyone who has a Plone website can easily integrate with the Siteimprove service and take advantage of all of Siteimprove’s enterprise-scale features while they are working on their content!


Need a Full-stack Python/Django Developer
Our senior developer team is looking for a teammate: a full-stack Python/Django Developer (Middle+). This is a remote job offer for full-time, long-term employment. Professional and friendly team!... (Budget: $8 - $15 USD, Jobs: Angular.js, Django, Javascript, NoSQL Couch & Mongo, Python)
An Adorable Albino
A beautiful albino ball python poses for the camera in these nice snapshots.
MSU: Olive and Macklot's Pythons
Dave Palumbo interviews Dennis McNamara and learns the tricks of the trade to successfully breed Australian olive pythons. Plus, check out the super cool Macklot’s python (Liasis mackloti).
THE GONZO BLOG DOO-DAH MAN PRAYS
The Gonzo Daily: Tuesday/Wednesday
https://gonzo-multimedia.blogspot.com/
 
YER EDITOR SEZ:
 
Something serious has come up and so I am doing tomorrow's blogs tonight.
 
ALL TODAY'S GONZO NEWS WOT'S FIT TO PRINT:
 
http://gonzo-multimedia.blogspot.com/2018/09/thom-world-poet-daily-poem_11.html
THOM THE WORLD POET: The Daily Poem
http://gonzo-multimedia.blogspot.com/2018/09/chris-squire-tribute-album.html
CHRIS SQUIRE TRIBUTE ALBUM
http://gonzo-multimedia.blogspot.com/2018/09/zappastuff_11.html
ZAPPASTUFF
http://gonzo-multimedia.blogspot.com/2018/09/yes-featuring-arw-in-news_11.html
YES FEATURING ARW IN THE NEWS
 
OTHER IMPORTANT STUFF FROM THE GONZOVERSE:
 
For those of you who are interested in such things, the Gonzo Privacy Policy is here:
http://gonzo-multimedia.blogspot.com/p/privacy-policy-this-privacy-policy.html
 
And the CFZ Privacy Policy is here:
http://tinyurl.com/y7b9rlog
And, yes,
http://forteanzoology.blogspot.com/p/privacy-policy-this-privacy-policy.html
 
CHECK OUT THE GONZO STORES:
 
UK
http://www.gonzomultimedia.co.uk/
US
http://www.gonzomultimedia.com
 
AND OTHER STUFF FEATURING VARIOUS GONZO CONTRIBUTORS:
 
Our webTV show:
https://www.facebook.com/OnTheTrack/posts/632561497117057
 
And if you fancy supporting it on Patreon:
https://www.patreon.com/CFZ
 
And by the way chaps and chappesses, a trip to the Jon Downes megastore may seem to be in order:
https://gonzotesting.blogspot.com/2018/07/a-notion-of-shopkeepers.html
 
AND THE LATEST ISSUE OF THE GONZO MAGAZINE:
 
Gonzo Weekly #301-2
THE PEACE AND LOVE ISSUE
http://www.gonzoweekly.com/
 
In another fantastically fab issue, Richard meets Ringo Starr and goes into Therapy, Jon examines the autobiography of Cosey from Throbbing Gristle, John goes analogue, Doug goes to see Erasure, and Chris and Graham independently mention Eric Clapton, Alan gets the blues from Toby Mottershead, Raz goes to see Yes feat. ARW, Graham muses on Hawkwind and Arthur Brown, and Jon gets all apocalyptic and then releases an album.
#Hail Eris!
 
And there are radio shows from Strange Fruit, Mack Maloney, AND Friday Night Progressive, AND there are columns from all sorts of folk including Kev Rowlands, Neil Nixon, C J Stone, AND Roy Weard and the irrepressible Corinna AND Mr Biffo are back on board.  There is also a collection of more news, reviews, views, interviews and red kangaroos who've lost their shoes (OK, nothing to do with the largest extant macropods who are in a quandry with regards their footwear, but I got carried away with things that rhymed with OOOOS) than you can shake a stick at. And the best part is IT's ABSOLUTELY FREE!!!
 
This issue features:
 
Ken Worthington, John Shuttleworth, Apple Records, Ed Sheeran, Aretha Franklin, Neil Young, Daryl Hannah, Michael Jackson, Michael Raz Rescigno, YES featuring Anderson, Rabin & Wakeman, Bart Lancia, Peter Gabriel, Richard Freeman, Strange Fruit, Friday Night Progressive, Canterbury Sans Frontieres, Mack Maloney's Mystery Hour, Lindsay Keith Kemp, Anthony Toby Hiller, Marvin Neil Simon, Edward Calhoun King, Leslie Carswell Johnson, Spencer Patrick Jones, Turgut Berkes, Khaira Arby, Jack Constanzo, Danny Pearson, Jill Janus, Collins Leysath "DJ Ready Red", Kyle Pavone, Elliot "Ellie" Mannette, Rick Wakeman, Michael Bruce, Natural Gas, Richard Stellar, Ringo Starr, Doug Harr, Erasure, Alan Dearling, Toby Mottershead, The Barrels Ale House, John Brodie Good, sound systems, Analogue, Stephen Fearing, Kev Rowland, Arena, Black Foxxes, Black Moth, Bob Arthurs & Steve Lamattina, The Bonnevilles, Bulletboys,Crosson, Roo, Mr Biffo, Roy Weard, Chris Stone, Eric Clapton, Hawkwind, Arthur Brown, Jon Downes, The Wild Colonial Boy, Martin Springett, Cosey Fanni Tutti, Paul McCartney, Jimi Hendrix, Jim Morrison, Janis Joplin, Kurt Cobain, Carlos Santana, Elvis, Neil Nixon, Diamanda Galas.
 
And the last few issues are:
 
Issue 301-2 (Ringo Starr)
https://www.flipsnack.com/9FE5CEE9E8C/gonzo-301-2.html
Issue 299-300 (Aretha Franklin)
https://www.flipsnack.com/9FE5CEE9E8C/gonzo-299-30.html
Issue 298 (Alan in Hungary)
http://www.flipsnack.com/9FE5CEE9E8C/gonzo297.html
Issue 297 (Shir Ordo)
http://www.flipsnack.com/9FE5CEE9E8C/gonzo297.html
Issue 295-6 (Robert Berry)
http://www.flipsnack.com/9FE5CEE9E8C/gonzo295-6.html
Issue 294 (Bow Wow Wow)
http://www.flipsnack.com/9FE5CEE9E8C/gonzo294.html
Issue 293 (Stonehenge)
http://www.flipsnack.com/9FE5CEE9E8C/gonzo293.html
Issue 292 (Rolling Stones)
http://www.flipsnack.com/9FE5CEE9E8C/gonzo292.html
Issue 291 (Alien Weaponry)
http://www.flipsnack.com/9FE5CEE9E8C/gonzo291.html
Issue 290 (Frank Zappa)
http://www.flipsnack.com/9FE5CEE9E8C/gonzo290.html
Issue 289 (Misty in Roots)
http://www.flipsnack.com/9FE5CEE9E8C/gonzo289.html
Issue 288 (Paula Frazer)
http://www.flipsnack.com/9FE5CEE9E8C/gonzo288.html
Issue 287 (Boss Goodman)
http://www.flipsnack.com/9FE5CEE9E8C/gonzo287.html
Issue 286 (Monty Python)
http://www.flipsnack.com/9FE5CEE9E8C/gonzo286.html
Issue 285 (ELP)
http://www.flipsnack.com/9FE5CEE9E8C/gonzo285.html
Issue 284 (Strangelove)
http://www.flipsnack.com/9FE5CEE9E8C/gonzo284.html
Issue 283 (Record Store Day)
http://www.flipsnack.com/9FE5CEE9E8C/gonzo283.html
Issue 282 (Neil Finn and Fleetwood Mac)
http://www.flipsnack.com/9FE5CEE9E8C/gonzo282.html
Issue 281 (Carl Palmer)
http://www.flipsnack.com/9FE5CEE9E8C/gonzo281.html
Issue 280 (Steve Andrews)
 
All issues from #70 can be downloaded at www.gonzoweekly.com if you prefer. If you have problems downloading, just email me and I will add you to the Gonzo Weekly dropbox. The first 69 issues are archived there as well. Information is power chaps, we have to share it!
 
You can download the magazine in pdf form HERE:
http://www.gonzoweekly.com/pdf/
 
SPECIAL NOTICE: If you, too, want to unleash the power of your inner rock journalist, and want to join a rapidly growing band of likewise minded weirdos please email me at jon@eclipse.co.uk The more the merrier really.
 
 
* The Gonzo Daily is a two way process. If you have any news or want to write for us, please contact me at jon@eclipse.co.uk. If you are an artist and want to showcase your work, or even just say hello please write to me at gonzo@cfz.org.uk. Please copy, paste and spread the word about this magazine as widely as possible. We need people to read us in order to grow, and as soon as it is viable we shall be invading more traditional magaziney areas. Join in the fun, spread the word, and maybe if we all chant loud enough we CAN stop it raining. See you tomorrow...
 
* The Gonzo Daily is - as the name implies - a daily online magazine (mostly) about artists connected to the Gonzo Multimedia group of companies. But it also has other stuff as and when the editor feels like it. The same team also do a weekly newsletter called - imaginatively - The Gonzo Weekly. Find out about it at this link: www.gonzo-multimedia.blogspot.co.uk
 
* We should probably mention here, that some of our posts are links to things we have found on the internet that we think are of interest. We are not responsible for spelling or factual errors in other people's websites. Honest guv!
 

* Jon Downes, the Editor of all these ventures (and several others) is an old hippy of 58 who - together with a Jack Russell called Archie, an infantile orange cat named after a song by Frank Zappa, and two half grown kittens, one totally coincidentally named after one of the Manson Family, purely because she squeaks, puts it all together from a converted potato shed in a tumbledown cottage deep in rural Devon which he shares with various fish. He is ably assisted by his lovely wife Corinna, his bulldog/boxer Prudence, his elderly mother-in-law, and a motley collection of social malcontents. Plus.. did we mention Archie and the Cats?

Python Related Questions
Write code for the following, with comments. I need this done in 15 minutes. 1. Write Python code which sorts a list of 5 strings by increasing length. 2. Write Python code which sorts a list of 5 strings by increasing length, and so that strings of the same length are sorted lexicographically... (Budget: $10 - $30 USD, Jobs: Python, Software Architecture)
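For reference, a short sketch of how both questions can be answered in Python; the example list is made up.

```python
# 1. Sort a list of 5 strings by increasing length. sorted() is stable, so
#    equal-length strings keep their original relative order.
words = ["pear", "fig", "kiwi", "plum", "date"]
by_length = sorted(words, key=len)
print(by_length)             # ['fig', 'pear', 'kiwi', 'plum', 'date']

# 2. Sort by increasing length, with equal-length strings ordered
#    lexicographically, by sorting on a (length, string) tuple.
by_length_then_lex = sorted(words, key=lambda s: (len(s), s))
print(by_length_then_lex)    # ['fig', 'date', 'kiwi', 'pear', 'plum']
```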
Software Developer - PHP, Python, Perl, Linux - Fibernetics - Cambridge, ON
Fibernetics prides itself on being a disruptive force in the telecommunications industry, and it's this drive that fuels our innovation to deliver high tech,...
From Indeed - Tue, 24 Jul 2018 14:39:08 GMT - View all Cambridge, ON jobs
Snake Fight in the Attic


A snake catcher in Australia is a professional who lives our worst nightmares every day. On Monday, snake catcher Max Jackson was called to investigate an attic space in Mount Coolum, Queensland. What he found were two 8-foot carpet pythons wrestling. He captured those two, but since they were fighting, there's most likely a female nearby - and they are usually bigger! (via Digg)


Snaktbyte: Python 3300S - Stereo Gaming

Snaktbyte: Python 3300S - Stereo Gaming. Product: Snaktbyte: Python 3300S - Stereo Gaming Headset. Type: Game - PC. Description: PC stereo headset...
CHF 25.90


Software Engineer who will be a core member of a Fintech SME Lender, by Uprise Credit
Software Engineers will be the core of our online financing solution. We're looking for someone passionate about building new applications from the ground up to develop an innovative financing solution for new economy entrepreneurs in Asia who are underserved by banks. Here at Uprise, you'll work closely with your peers in a flat structure to share responsibilities. The technology team should move fast, celebrate great ideas, inspire testing and learning, and stretch for new solutions. This pivotal role is responsible for building the online SME lending platform from top to bottom, collaborating directly and frequently with the internal team, outsourced developers, and third party strategic partners to co-create solutions for merchant customers. The product currently targets online merchants who use certain types of online payment gateways. We expect to expand the user base to offline merchants as well through partnerships with other e-payment solution providers.
Requirements:
  • 3+ years of solid software development experience
  • Expert knowledge of one or more languages such as Javascript, Python, Ruby, Java, C#, Golang
  • Knowledge of one or more popular frameworks such as Rails, React, Play, NodeJS
  • Familiarity with databases, e.g. MySQL / MongoDB / PostgreSQL / Redis
  • Knowledge and experience in building online / offline payment gateway or financing solutions
  • Demonstrated interest and passion in bridging the financing gap faced by SMEs and entrepreneurs in Asia
  • Proficient in English and Chinese (either Mandarin or Cantonese)
  • Must be willing to occasionally travel to Hong Kong, Singapore, Taiwan and other Southeast Asian countries
Software Development Senior Engineer - Seagate Technology - Singapore
1. Design and develop test software in C/C++/Python for hard-disk firmware verification under Linux and Windows. 2. Maintain and enhance existing software...
From Seagate Technology - Tue, 11 Sep 2018 07:48:16 GMT - View all Singapore jobs
API Developer
CA-Walnut Creek, RESPONSIBILITIES: A Kforce client in Walnut Creek, California (CA) is looking for an API Developer to join their team! This is a contract to hire opportunity. All candidates must be able to convert without visa sponsorship needed. No remote work, needs to be in office 5 days a week. REQUIREMENTS: 3-5 years of experience Need an API developer Needs SOAP/REST based services, scripting in python or J
Electrical Engineer - Apollo Technical LLC - Fort Worth, TX
Computer languages, supporting several microcontroller languages including (machine code, Arduino, .NET, ATMEL, Python, PASCAL, C++, Ladder, Function Block)....
From Apollo Technical LLC - Thu, 02 Aug 2018 18:22:07 GMT - View all Fort Worth, TX jobs
          Arcsight Delivery Quality Assurance Resource Engineer, Network Security at Ecscorp Resources      Cache   Translate Page      
Ecscorp Resources is a solution engineering firm, established in the year 2001 with a cumulative of over 100 years experience. Our business is driven by passion and the spirit of friendliness; we harness the power of creativity and technology to drive innovation and deliver cutting-edge solutions to increase productivity. Our passion, experience, expertise and shared knowledge have forged us into a formidable catalyst for desirable, sustainable change and incessant growth. We strive to provide achievable solutions that efficiently and measurably support goal-focused business priorities and objectives.Duration: 3 months Detailed Description ArcSight division, is a leading global provider of Compliance and Security Management solutions that protect enterprises, education and governmental agencies. ArcSight helps customers comply with corporate and regulatory policy, safeguard their assets and processes and control risk. The ArcSight platform collects and correlates user activity and event data across the enterprise so that businesses can rapidly identify, prioritize and respond to compliance violations, policy breaches, cybersecurity attacks, and insider threats. The successful candidate for this position will work on the ArcSight R&D team. This is a hands-on position that will require the candidate to work with data collected from various network devices in combination with the various ArcSight product lines in order to deliver content that will help address the needs of all of ArcSight's customers. The ideal candidate will have a good understanding of enterprise security coupled with hands-on networking and security skills as well as an ability to write and understand scripting languages such as Perl, Python. Research, analyze and understand log sources, particularly from various devices in an enterprise network Appropriately categorize the security messages generated by various sources into the multi-dimensional ArcSight Normalization schema Write and modify scripts to parse out messages and interface with the ArcSight categorization database Work on content and vulnerability update releases Write scripts and automation to optimize various processes involved Understand content for ArcSight ESM, including correlation rules, dashboards, reports, visualizations, etc. Understand requirements to write content to address use cases based on customer requests and feedback Assist in building comprehensive, correct and useful ArcSight Connector and ESM content to ArcSight customers on schedule. Requirements Excellent knowledge of IT operations, administration and security Hands-on experience of a variety of different networking and security devices, such as Firewalls, Routers, IDS/IPS etc. Ability to examine operational and security logs generated by networking and security devices, identify the meaning and severity of them Understand different logging mechanisms, standards and formats Very strong practical Linux-based and Windows-based system administration skills Strong scripting skills using languages (Shell, Perl, Python etc), and Regex Hands-on experience of database such as MySQL Knowledge of Security Information Management solution such as ArcSight ESM Experience with a version control system (Perforce, GitHub) Advanced experience with Microsoft Excel Excellent written and verbal communication skills Must possess ability and desire to learn new technologies quickly while remaining detailed oriented Strong analytical skill and problem solving skills, multi-tasking. 
Pluses: Network device or Security certification (CISSP, CEH etc) Experience with application server such as Apache Tomcat Work experience in security operation center (SOC).
          PRACTICAL SQL 2017 Online Training (Delhi)      Cache   Translate Page      
SQL School is one of the best training institutes for Microsoft SQL Server Developer Training, SQL DBA Training, MSBI Training, Power BI Training, Azure Training, Data Science Training, Python Training, Hadoop Training, Tableau Training, Machine Learning ...
          These Killer Snakes Are Terrorizing Southern Florida—and I Caught One of Them      Cache   Translate Page      

Pythons are wreaking havoc on Florida wildlife. One reporter learns firsthand what’s being done about it.

The post These Killer Snakes Are Terrorizing Southern Florida—and I Caught One of Them appeared first on Reader's Digest.


          xlib-extra/pyqt5-python3-5.11.2-2-x86_64      Cache   Translate Page      
PyQt5 is a set of Python 3.x bindings for the Qt5 toolkit.
          xlib-extra/pyqt5-common-5.11.2-2-x86_64      Cache   Translate Page      
Common PyQt5 files shared between pyqt5 and pyqt5-python3
          xlib-extra/pyqt5-5.11.2-2-x86_64      Cache   Translate Page      
PyQt5 is a set of Python 2.x bindings for the Qt5 toolkit.
          xlib-extra/sip-4.19.12-1-x86_64      Cache   Translate Page      
Python 2.x SIP bindings for C and C++ libraries
          xlib-extra/sip-python3-4.19.12-1-x86_64      Cache   Translate Page      
Python 3.x SIP bindings for C and C++ libraries
          xlib-extra/sip-tool-4.19.12-1-x86_64      Cache   Translate Page      
A tool that makes it easy to create Python bindings for C and C++ libraries
          #10: Learning Python: Powerful Object-Oriented Programming      Cache   Translate Page      
Learning Python
Learning Python: Powerful Object-Oriented Programming
Mark Lutz
(27)

Buy new: CDN$ 84.05 CDN$ 49.65
44 used & new from CDN$ 49.65

(Visit the Bestsellers in Web Development list for authoritative information on this product's current rank.)
          Database Engineer      Cache   Translate Page      
CA-Sunnyvale, Requirements:
- 5 years of Database Engineer experience
- Oracle Database (SQL queries, architecture)
- noSQL - Cassandra, Couchbase, mongoDB (architecture/operations)
- RDBMS: Oracle
- Able to do database technology evaluation/POCs
- Experience with automation and scripting (shell scripts/Python)
- Automate database related monitoring/operations and repetitive tasks
- Build tools
- Write APIs if required
          Software Developer (m/f), Python/MySQL - LIMETEC Biotechnologies GmbH - Hennigsdorf      Cache   Translate Page      
Successfully completed university degree in computer science or a related field, or alternatively successfully completed vocational training as a...
Found at LIMETEC Biotechnologies GmbH - Thu, 31 May 2018 10:03:30 GMT - View all Hennigsdorf jobs
          Project no. 52451 - Python Developer (m/f)      Cache   Translate Page      
We are currently looking for an experienced Python developer (m/f) for the automation of preprocessing for FEM.

Your tasks include:

+ Analysis of the requirements from the business department for automating preprocessing for FEM
+ Creation of the technical concept
+ Implementation of the automation in Python

The workload on this project is 20-40%.

Requirements:
+ Python
+ Automotive industry (nice to have)

Additional information:
Have we sparked your interest? Then we look forward to receiving your detailed qualification profile, including your expected hourly rate.

Project no.:
52451

Position type:
Freelance

Location:
D7, Stuttgart area

Start:
asap

Duration:
3 months +
          Worth a read | 30 Redis interview questions - everything an interviewer could ask      Cache   Translate Page      

Worth a read | 30 Redis interview questions - everything an interviewer could ask
1. What is Redis? Briefly describe its advantages and disadvantages.

Redis is essentially an in-memory key-value database, much like memcached. The entire database is loaded into memory for operation, and the data is periodically flushed to disk asynchronously for persistence.

Because it operates purely in memory, Redis performance is excellent: it can handle more than 100,000 read/write operations per second, making it the fastest key-value DB known.

Redis is outstanding not only for its performance; its biggest appeal is support for storing multiple data structures. In addition, a single value can be up to 1GB, unlike memcached which can only store 1MB per value, so Redis can be used to implement many useful features.

For example, its List can serve as a FIFO doubly linked list to implement a lightweight, high-performance message queue service, and its Set can be used to build a high-performance tag system, and so on.

Redis can also set an expire time on stored key-value pairs, so it can be used as an enhanced version of memcached. Redis's main drawback is that database capacity is limited by physical memory, so it cannot be used for high-performance reads and writes of massive data sets; the scenarios where Redis fits are mainly high-performance operations and computation on smaller data sets.

2. What advantages does Redis have over memcached?

(1) In memcached all values are simple strings; Redis, as its replacement, supports richer data types.

(2) Redis is much faster than memcached.

(3) Redis can persist its data.

3. Which data types does Redis support?

String, List, Set, Sorted Set, hashes.

4. What physical resource does Redis mainly consume?

Memory.

5. What is the full name of Redis?

Remote Dictionary Server.

6. What data eviction policies does Redis have?

noeviction: return an error when the memory limit is reached and the client tries to execute a command that would use more memory (most write commands, with DEL and a few other exceptions).

allkeys-lru: try to evict the least recently used (LRU) keys to make room for newly added data.

volatile-lru: try to evict the least recently used (LRU) keys, but only among keys that have an expire set, to make room for newly added data.

allkeys-random: evict random keys to make room for newly added data.

volatile-random: evict random keys to make room for newly added data, but only among keys that have an expire set.

volatile-ttl: evict keys that have an expire set, preferring keys with a shorter time to live (TTL), to make room for newly added data.
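As an added illustration (not part of the original answer), these policies are normally set through the maxmemory configuration; a minimal sketch with the redis-py client, assuming a local Redis instance on the default port:

import redis

r = redis.StrictRedis(host='localhost', port=6379, db=0)

# cap memory and evict least-recently-used keys across the whole keyspace
r.config_set('maxmemory', '100mb')
r.config_set('maxmemory-policy', 'allkeys-lru')

print(r.config_get('maxmemory-policy'))  # {'maxmemory-policy': 'allkeys-lru'}

The same settings can of course be placed in redis.conf instead of being applied at runtime.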

7. Why is there no official Windows version of Redis?

Because the Linux version is already very stable and has a large user base, there is no need to develop a Windows version, which would instead introduce compatibility and other problems.

8. What is the maximum size a string value can store?

512M

9. Why does Redis need to keep all data in memory?

To achieve the fastest read/write speeds, Redis keeps all data in memory and writes data to disk asynchronously.

So Redis combines speed with data persistence. If the data were not kept in memory, disk I/O speed would severely affect Redis performance.

As memory gets cheaper and cheaper, Redis will become more and more popular. If a maximum memory limit is set, new values can no longer be inserted once the existing records reach that limit.

10. How should a Redis cluster solution be built? What options are there?

1. codis.

Currently the most widely used clustering solution; it has essentially the same effect as twemproxy, but it supports recovering old node data onto new hash nodes when the number of nodes changes.

2. The cluster built into redis cluster 3.0. Its distinguishing features are that its distribution algorithm is not consistent hashing but the concept of hash slots, and that it natively supports replica (slave) nodes. See the official documentation for details.

3. Implementation in the business code layer: start several unrelated Redis instances, hash the key in the code layer, and then operate on the data in the corresponding Redis instance. This approach places high demands on the hashing-layer code; considerations include replacement algorithms after node failure, automatic script-based recovery after data reshuffling, instance monitoring, and so on.

11. Under what circumstances does a Redis cluster become entirely unavailable?

In a cluster with three nodes A, B, and C, without a replication model, if node B fails the whole cluster becomes unavailable because the slot range 5501-11000 is missing.

12. MySQL has 20 million records but Redis only stores 200,000; how do you ensure the data in Redis is all hot data?

When the Redis in-memory data set grows to a certain size, a data eviction policy is applied.

13. Which scenarios is Redis well suited to?

(1) Session cache

The most common scenario for Redis is the session cache. The advantage of caching sessions with Redis over other stores (such as Memcached) is that Redis provides persistence. When maintaining a cache that does not strictly require consistency, most people would be unhappy if all of a user's shopping cart information were lost; now, would they still be?

Fortunately, with the improvements Redis has made over the years, it is easy to find documentation on how to properly use Redis for session caching. Even the well-known commercial platform Magento provides a Redis plugin.

(2) Full page cache (FPC)

Beyond basic session tokens, Redis also provides a very simple FPC platform. Returning to the consistency question: even if the Redis instance is restarted, users will not see a drop in page load speed thanks to disk persistence, which is a big improvement, similar to a local PHP FPC.

Taking Magento as an example again, Magento provides a plugin to use Redis as the full page cache backend.

In addition, for WordPress users, Pantheon has a very good plugin, wp-redis, which helps you load previously visited pages as fast as possible.

(3) Queues

One big advantage of Redis among in-memory storage engines is that it provides list and set operations, which makes Redis work well as a message queue platform. Using Redis as a queue feels much like push/pop operations on a list in a local programming language (such as Python).

A quick Google search for "Redis queues" immediately turns up a large number of open source projects whose purpose is to build very good backend tooling on Redis for all kinds of queueing needs. For example, Celery has a backend that uses Redis as the broker, which you can check out from there.

(4) Leaderboards / counters

Redis implements in-memory increment and decrement operations on numbers very well. Sets and Sorted Sets also make these operations very simple to perform; Redis just happens to provide both data structures.

So, to get the top 10 ranked users from a sorted set - let's call it "user_scores" - we only need to execute something like the following:

Of course, this assumes you rank users by their score in ascending order. If you want to return the users together with their scores, you need to execute:

ZRANGE user_scores 0 10 WITHSCORES

Agora Games is a good example: implemented in Ruby, its leaderboards use Redis to store the data, as you can see there.
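As an added sketch (not from the original article), the leaderboard pattern above looks like this from Python with the redis-py client; the key name user_scores follows the example, everything else is illustrative:

import redis

r = redis.StrictRedis(host='localhost', port=6379, db=0)

# record some scores (mapping-style zadd as in redis-py 3.x)
r.zadd('user_scores', {'alice': 350, 'bob': 120, 'carol': 740})

# top 10 users together with their scores, highest first
print(r.zrevrange('user_scores', 0, 9, withscores=True))
# e.g. [(b'carol', 740.0), (b'alice', 350.0), (b'bob', 120.0)]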

(5) Publish/Subscribe

Last (but certainly not least) is Redis's publish/subscribe feature. There really are a great many use cases for pub/sub. I have seen people use it for social network connections, as a pub/sub-based script trigger, and even to build chat systems on Redis pub/sub!

14. Which Java clients does Redis support? Which one is officially recommended?

Redisson, Jedis, lettuce, and so on; Redisson is the officially recommended one.

15. What is the relationship between Redis and Redisson?

Redisson is an advanced distributed-coordination Redis client that helps users easily implement Java objects in a distributed environment (Bloom filter, BitSet, Set, SetMultimap, ScoredSortedSet, SortedSet, Map, ConcurrentMap, List, ListMultimap, Queue, BlockingQueue, Deque, BlockingDeque, Semaphore, Lock, ReadWriteLock, AtomicLong, CountDownLatch, Publish / Subscribe, HyperLogLog).

16. What are the pros and cons of Jedis compared with Redisson?

Jedis is the Java client implementation for Redis; its API provides fairly comprehensive support for Redis commands.

Redisson implements distributed and scalable Java data structures. Compared with Jedis its feature set is simpler: it does not support string operations, nor Redis features such as sorting, transactions, pipelines, or partitioning. Redisson's aim is to promote separation of concerns around Redis, so that users can focus their energy on handling business logic.

17. How do you set a password in Redis and authenticate with it?

Set the password: config set requirepass 123456

Authenticate: auth 123456

18. Explain the concept of Redis hash slots.

Redis Cluster does not use consistent hashing; instead it introduces the concept of hash slots. A Redis cluster has 16384 hash slots; each key is passed through CRC16 and taken modulo 16384 to decide which slot it is placed in, and each node in the cluster is responsible for a portion of the hash slots.

19. What is the Redis Cluster master-replica replication model like?

To keep the cluster available when some nodes fail or most nodes cannot communicate, the cluster uses a master-replica replication model, where each node has N-1 replicas.

20. Can a Redis cluster lose write operations? Why?

Redis does not guarantee strong consistency, which means that in practice the cluster may lose write operations under certain conditions.

21. How is replication done between Redis cluster nodes?

Asynchronous replication.

22. What is the maximum number of nodes in a Redis cluster?

16384.

23. How do you select a database in a Redis cluster?

Redis Cluster currently cannot select databases; it defaults to database 0.

24. How do you test Redis connectivity?

ping

25. What are pipelines in Redis for?

A request/response server can process new requests even while old requests have not yet been answered. This makes it possible to send multiple commands to the server without waiting for the replies, and finally read all the replies in a single step.

This is pipelining, a technique that has been in wide use for decades. For example, many POP3 implementations already support this feature, greatly speeding up the process of downloading new mail from the server.
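For illustration only (not part of the original answer), this is how pipelining looks from the redis-py client: commands are queued locally and sent in a single round trip, and the replies are read back together:

import redis

r = redis.StrictRedis(host='localhost', port=6379, db=0)

pipe = r.pipeline()
for i in range(3):
    pipe.set('key:%d' % i, i)   # queued locally, nothing sent yet
pipe.get('key:0')
results = pipe.execute()        # one round trip to the server
print(results)                  # e.g. [True, True, True, b'0']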

26. How should Redis transactions be understood?

A transaction is a single isolated operation: all commands in a transaction are serialized and executed in order. While a transaction is executing, it will not be interrupted by command requests sent from other clients.

A transaction is an atomic operation: either all of the commands in the transaction are executed, or none of them are.

27. Which commands are related to Redis transactions?

MULTI, EXEC, DISCARD, WATCH

28. How do you set an expiration time on a Redis key, and how do you make it permanent?

With the EXPIRE and PERSIST commands.
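A small hedged example of both commands via redis-py (the key name is made up for illustration):

import redis

r = redis.StrictRedis(host='localhost', port=6379, db=0)

r.set('session:42', 'some-data')
r.expire('session:42', 30)      # key is removed after 30 seconds
print(r.ttl('session:42'))      # remaining time to live, e.g. 30

r.persist('session:42')         # drop the expiration, key is permanent again
print(r.ttl('session:42'))      # -1 means no expiration is set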

29. How do you optimize memory usage in Redis?

Use hashes wherever possible. A hash (that is, one storing a small number of fields) uses very little memory, so you should abstract your data model into hashes as much as you can.

For example, if your web system has a user object, do not create separate keys for the user's first name, last name, email, and password; instead, store all of the user's information in a single hash.

30. How does the Redis eviction process work?

A client runs a new command, adding new data.

Redis checks memory usage, and if it exceeds the maxmemory limit, it evicts keys according to the configured policy.

A new command is executed, and so on.

So we keep crossing the memory limit boundary, repeatedly reaching it and then evicting back below it.

If the result of a command causes a large amount of memory to be used (for example, storing the intersection of very large sets into a new key), it does not take long for this memory usage to exceed the limit.


          Top Ambari Interview Questions and Answers 2018      Cache   Translate Page      
1. Ambari Interview Preparation

In our last article, we discussed Ambari Interview Questions and Answers Part 1. Today, we will see part 2 of the top Ambari Interview Questions and Answers. This part contains technical and practical Ambari interview questions, designed by Ambari specialists. If you are preparing for an Ambari interview, you should go through both parts of the Ambari Interview Questions and Answers. These are all well-researched questions which will definitely help you move ahead.

Still, if you face any confusion with these frequently asked Ambari Interview Questions and Answers, we have provided links to the relevant topics. The given links will help you learn more about Apache Ambari.


Top Ambari Interview Questions and Answers 2018

2. Best Ambari Interview Questions and Answers

Following are the most asked Ambari Interview Questions and Answers, which will help both freshers and experienced. Let’s discuss these questions and answers for Apache Ambari

Que 1. What are the purposes of using Ambari shell?

Ans. Ambari Shell supports:

- All the functionality available through the Ambari web app.
- Context-aware availability of commands.
- Tab completion.
- Optional and required parameter support.

Que 2. What is the required action you need to perform if you opt for scheduled maintenance on the cluster nodes?

Ans. Ambari offers a Maintenance Mode option for all the nodes in the cluster. Hence, before performing maintenance, we can enable Maintenance Mode in Ambari to avoid alerts.

Que 3. What is the role of “ambari-qa” user?

Ans. The ‘ambari-qa’ user account, which Ambari creates on all nodes in the cluster, performs service checks against the cluster services.

Que 4. Explain future growth of Apache Ambari?

Ans. With the increasing demand for big data technologies like Hadoop, we have seen massive usage of data analysis, which puts huge clusters in place. Hence, companies are increasingly leaning towards technologies like Apache Ambari for better management of these clusters with enhanced operational efficiency.

In addition, Hortonworks is working on Ambari to make it more scalable. Thus, knowledge of Apache Ambari is an added advantage alongside Hadoop.

Que 5. State some Ambari components which we can use for automation as well as integration?

Ans. For automation and integration in particular, the important Ambari components are separated into three pieces:

- Ambari Stacks
- Ambari Blueprints
- Ambari API

However, to make sure that it deals with automation and integration problems carefully, Ambari is built from scratch.

Que 6. In which language is the Ambari Shell is developed?

Ans. The Ambari Shell is developed in Java. Moreover, it is based on the Ambari REST client as well as the Spring Shell framework.

Que 7. State benefits of Hadoop users by using Apache Ambari.

Ans. We can definitely say that for individuals who use Hadoop in their day-to-day work, Apache Ambari is a great gift. The benefits of Apache Ambari:

- Simplified installation process.
- Easy configuration and management.
- Centralized security setup process.
- Full visibility into cluster health.
- Extendable and customizable.

Que 8. Name some independent extensions that contribute to the Ambari codebase?

Ans.They are:

1. Ambari SCOM Management Pack

2. Apache Slider View

Ambari Interview Questions and Answers for freshers Q. 1,2,4,6,7,8 Ambari Interview Questions and Answers for experienced Q. 3,5

Que 9. Can we use the Ambari Python client to make use of the Ambari APIs?

Ans.Yes.

Que 10. What is the process of creating an Ambari client?

Ans.To create an Ambari client, the code is:

from ambari_client.ambari_api import AmbariClient
headers_dict = {'X-Requested-By': 'mycompany'}  # Ambari needs X-Requested-By header
client = AmbariClient("localhost", 8080, "admin", "admin", version=1, http_header=headers_dict)
print client.version
print client.host_url
print "\n"

Que 11. How can we see all the clusters that are available in Ambari?

Ans.In order to see all the clusters that are available in Ambari , the code is:

all_clusters = client.get_all_clusters()
print all_clusters.to_json_dict()
print all_clusters

Que 12. How can we see all the hosts that are available in Ambari?

Ans.To see all the hosts that are available in Ambari, the code is:

all_hosts = client.get_all_hosts()
print all_hosts
print all_hosts.to_json_dict()
print "\n"

Que 13. Name the three layers Ambari supports.

Ans.Ambari supports several layers:

1. Core Hadoop
2. Essential Hadoop
3. Hadoop Support

Learn More about Hadoop

Que 14. What are the different methods to set up local repositories?

Ans.To deploy the local repositories, there are two ways:

- Mirror the packages to the local repository.
- Or, download the entire repository tarball and build the local repository from it.

Que 15. How to set up local repository manually?

Ans.In order to set up a local repository manually, steps are:

1. First, set up a host with Apache httpd.
2. Then download the tarball copy of each repository's entire contents.
3. Once it is downloaded, extract the contents.

Ambari Interview Questions and Answers for freshers Q. 13,14,15 Ambari Interview Questions and Answers for experienced Q. 10,11,12

Que 16. How is recovery achieved in Ambari?

Ans.Recovery happens in Ambari in the following ways:

Based on actions

In Ambari, after a restart the master checks for pending actions and reschedules them, since every action is persisted. The master also rebuilds the state machines on restart, as the cluster state is persisted in the database. There is a race condition where actions may complete but the master crashes before recording their completion; a special consideration here is that actions should be idempotent. The master restarts any actions that are not marked as complete or that have failed in the DB. These persisted actions can be seen in the redo logs.

Based on the desired state
          Linux, Mac OS Software Scripting Test Engineer for Hardware Team in Cupertino, CA      Cache   Translate Page      
CA-Monte Vista, Seeking a Software Test Engineer for the Mac HW team. You will work in a fast paced team developing diagnostic solutions and solving problems relating to current and new Mac products. This will involve developing and debugging software, primarily in Python and Web based code, and working closely with the cross-functional teams. Help with test execution and do qualification tests. Utilize and execu
          What is Spark, and how does it differ from Hadoop?      Cache   Translate Page      

What is Spark? Spark is a general-purpose parallel computing framework, similar to Hadoop MapReduce, open-sourced by UC Berkeley's AMP Lab. Spark implements distributed computing based on the map-reduce model and has the advantages of Hadoop MapReduce; but unlike MapReduce, intermediate job output and results can be kept in memory, so reading and writing HDFS is no longer necessary. Spark is therefore better suited to algorithms that need iterative map-reduce, such as data mining and machine learning. Its architecture is shown in the figure below:


(Figure: Spark architecture)
Comparing Spark and Hadoop

Spark keeps intermediate data in memory, which is more efficient for iterative computation.

Spark is better suited to ML and data mining workloads with many iterations, because Spark has the RDD abstraction.

Spark is more general than Hadoop

Spark provides many types of dataset operations, unlike Hadoop which only offers Map and Reduce. For example map, filter, flatMap, sample, groupByKey, reduceByKey, union, join, cogroup, mapValues, sort, partitionBy and many other operation types; Spark calls these Transformations. It also provides many actions such as count, collect, reduce, lookup, and save.

These diverse dataset operations are convenient for users developing higher-level applications. The communication model between processing nodes is no longer limited to the single Data Shuffle pattern of Hadoop. Users can name, materialize, and control the storage and partitioning of intermediate results, so the programming model is more flexible than Hadoop's.
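To make the transformation/action distinction concrete, here is a small PySpark sketch (the input path and local master are assumptions for illustration, not part of the original article):

from pyspark import SparkContext

sc = SparkContext("local[*]", "TransformationsVsActions")

lines = sc.textFile("data/input.txt")               # an RDD; nothing is computed yet

# transformations are lazy: they only describe the computation
words = lines.flatMap(lambda line: line.split())
pairs = words.map(lambda w: (w, 1))
counts = pairs.reduceByKey(lambda a, b: a + b)

# actions trigger the actual execution
print(counts.count())
print(counts.collect()[:10])

sc.stop()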

However, because of the nature of RDDs, Spark is not suitable for applications that asynchronously update state at a fine granularity, such as storage for web services or incremental web crawlers and indexing; in other words, it does not fit application models based on incremental modification.

Fault tolerance

For distributed dataset computation, fault tolerance is achieved through checkpoints. There are two checkpointing approaches: checkpointing the data, and logging the updates. Users can control which approach is used.

Usability

Spark improves usability by providing rich Scala, Java, and Python APIs and an interactive shell.

Combining Spark and Hadoop

Spark can read and write data directly on HDFS and also supports Spark on YARN. Spark can run in the same cluster as MapReduce, sharing storage and compute resources; the Shark data warehouse borrows from Hive in its implementation and is almost fully compatible with Hive.

Scenarios where Spark fits

Spark is a memory-based iterative computation framework, suited to applications that operate on a specific dataset many times. The more often the data is reused and the larger the volume that must be read, the greater the benefit; when the data volume is small but the computation is very intensive, the benefit is relatively smaller (in big data architectures this is an important factor in deciding whether to use Spark).

Because of the nature of RDDs, Spark is not suitable for applications that asynchronously update state at a fine granularity, such as storage for web services or incremental web crawlers and indexing; it does not fit application models based on incremental modification.

Overall, Spark is fairly broadly applicable and general-purpose.

Run modes

Local mode

Standalone mode

Mesos mode

YARN mode

The Spark ecosystem

Shark (Hive on Spark): Shark basically provides the same HiveQL command interface as Hive on top of the Spark framework. To stay as compatible with Hive as possible, Shark uses Hive's APIs for query parsing and logical plan generation, while the final physical plan execution stage uses Spark instead of Hadoop MapReduce. By configuring Shark parameters, Shark can automatically cache specific RDDs in memory, enabling data reuse and speeding up retrieval of specific datasets. At the same time, Shark implements specific data analysis and learning algorithms through UDFs (user-defined functions), so that SQL querying and computational analysis can be combined, maximizing the reuse of RDDs.

Spark Streaming: a framework built on Spark for processing stream data. The basic idea is to split the stream into small time slices (a few seconds) and process these small chunks in a batch-like manner. Spark Streaming is built on Spark because, on the one hand, Spark's low-latency execution engine (100ms+) can be used for real-time computation, and on the other hand, compared with other record-based processing frameworks (such as Storm), RDD datasets make efficient fault-tolerance handling easier. The micro-batch approach also makes it compatible with both batch and real-time data processing logic and algorithms, which is convenient for applications that need joint analysis of historical and real-time data.

Bagel: Pregel on Spark, which lets you do graph computation with Spark; this is a very useful small project. Bagel ships with an example that implements Google's PageRank algorithm.


          Software Programmer/Developer (Level 3) - Eagle Professional Resources - Toronto, ON      Cache   Translate Page      
Developing API’s for a variety of difference systems, focusing on data. Experience in python API development specifically in systems data integration;...
From Eagle Professional Resources - Wed, 12 Sep 2018 23:22:37 GMT - View all Toronto, ON jobs
          Python Artificial Intelligence Projects for Beginners      Cache   Translate Page      

eBook Details: Paperback: 162 pages Publisher: WOW! eBook (July 31, 2018) Language: English ISBN-10: 1789539463 ISBN-13: 978-1789539462 eBook Description: Python Artificial Intelligence Projects for Beginners: Build smart applications by implementing real-world artificial intelligence projects and get up and running with Artificial Intelligence using 8 smart and exciting AI applications

The post Python Artificial Intelligence Projects for Beginners appeared first on eBookee: Free eBooks & Video Tutorials Download.


          Technical Lead, Python/PHP - Kinessor - Montréal, QC      Cache   Translate Page      
We are looking for a Technical Lead, Python/PHP. Our ideal candidate has 7-10 years of experience, ideally with a telecom/systems background...
From Indeed - Fri, 17 Aug 2018 16:58:24 GMT - View all Montréal, QC jobs
          Senior Software Engineer - Python - Tucows - Toronto, ON      Cache   Translate Page      
Flask, Tornado, Django. Tucows provides domain names, Internet services such as email hosting and other value-added services to customers around the world....
From Tucows - Sat, 11 Aug 2018 05:36:13 GMT - View all Toronto, ON jobs
          Senior Python Developer - Chisel - Toronto, ON      Cache   Translate Page      
Chisel.ai is a fast-growing, dynamic startup transforming the insurance industry using Artificial Intelligence. Our novel algorithms employ techniques from...
From Chisel - Mon, 23 Jul 2018 19:50:37 GMT - View all Toronto, ON jobs
          Senior Software Developer - Encircle - Kitchener, ON      Cache   Translate Page      
Server Development - Tornado (Python), SQLAlchemy, and Postgresql. We’re Encircle, nice to meet you!...
From Encircle - Thu, 05 Jul 2018 15:05:51 GMT - View all Kitchener, ON jobs
          Software Developer - Encircle - Kitchener, ON      Cache   Translate Page      
Server Development - Tornado (Python), SQLAlchemy, and Postgresql. We’re Encircle, nice to meet you!...
From Encircle - Thu, 05 Jul 2018 15:05:49 GMT - View all Kitchener, ON jobs
          configure the captcha ocr api      Cache   Translate Page      
Hi, I want to configure a captcha OCR API with a script so that the system sends the captcha to the OCR and the OCR sends the answer back to the API. I prefer to use this API: http://www.eve.cm (Budget: $750 - $1500 USD, Jobs: Javascript, MySQL, PHP, Python, Software Architecture)
          Full Stack Developer in Python, Django & AngularJS      Cache   Translate Page      
Our company develops a product that makes it easier to start and run your company in France. Our product has been live for about four years and we have more than 15 000 active customers. We are looking... (Budget: $750 - $1500 USD, Jobs: Angular.js, Django, Javascript, Python, Web Scraping)
          Project planning website in python      Cache   Translate Page      
We are creating a project planning website similar to smartsheet.com. It largely has elements from Google Sheets. We need it to be developed in Python/Java/Node for the backend and Angular/ReactJS for the frontend... (Budget: $250 - $750 USD, Jobs: Django, Git, Javascript, Python, Website Design)
          New Python-based Ransomware Poses as Locky      Cache   Translate Page      

A ransomware family used in attacks in July and August was posing as the infamous Locky ransomware that was highly active in 2016, Trend Micro researchers have discovered. 

read more


          Python Scripting - Accenture - Bengaluru, Karnataka      Cache   Translate Page      
Accenture Technology powers our clients’ businesses with innovative technologies—established and emerging—changing the way their people and customers experience...
From Accenture - Wed, 12 Sep 2018 19:46:48 GMT - View all Bengaluru, Karnataka jobs
          Senior Data Analyst - William E. Wecker Associates, Inc. - Jackson, WY      Cache   Translate Page      
Experience in data analysis and strong computer skills (we use SAS, Stata, R and S-Plus, Python, Perl, Mathematica, and other scientific packages, and standard...
From William E. Wecker Associates, Inc. - Sat, 23 Jun 2018 06:13:20 GMT - View all Jackson, WY jobs
          python-easygui 0.98.1-2 any      Cache   Translate Page      
Python module for very simple, very easy GUI programming
          python-gphoto2 1.8.3-1 x86_64      Cache   Translate Page      
Python interface to libgphoto2
          python-colour 0.1.5-3 any      Cache   Translate Page      
Colour representations manipulation library (RGB, HSL, web, ...)
          python-rawkit 0.6.0-3 any      Cache   Translate Page      
CTypes based LibRaw bindings
          "As if she can code": Victoria's Secret supermodel knows C++, holds two degrees and writes apps... male trolls get called out      Cache   Translate Page      
"As if she can code": Victoria's Secret supermodel knows C++, holds two degrees and writes apps... male trolls get called out
Lyndsey Scott, a model for the US lingerie brand Victoria's Secret, has a beautiful face and a stunning figure, stands 175 cm tall, and was also the first African-American model for the Calvin Klein brand. Her day job is as a web software engineer, and she understands programming languages such as Python, C++ and Java. Recently, however, male netizens mocked her work history, saying "she probably just writes Hello World in every language" and "as if she can actually code", sparking a debate about sexism. <<Read the full story...>>

          Infrastructure Support Specialist (Linux / SAN / UNIX) - Trigyn - Montréal, QC      Cache   Translate Page      
3+ years’ experience programming with SQL, Regular Expressions, XML, BASH, KSH, Perl and Python. Our direct financial services client has an opening for...
From Trigyn - Wed, 08 Aug 2018 22:05:58 GMT - View all Montréal, QC jobs
          C++ developer - ExperTech Personnel Services Inc. - Montréal, QC      Cache   Translate Page      
Coding in C++, KSH, Python and T-SQL on Sybase IQ, Sybase ASE and SQL Server. ExperTech is a leading staffing and recruiting company, based in Montreal, QC and...
From ExperTech Personnel Services Inc. - Fri, 31 Aug 2018 18:55:41 GMT - View all Montréal, QC jobs
          Senior System Analyst (System Administrator) - TELUS Health - TELUS Communications - Montréal, QC      Cache   Translate Page      
Bash, Ksh, Python. 5 years programming experience with Bash, KSH, python. Join our team....
From TELUS Communications - Wed, 15 Aug 2018 18:07:59 GMT - View all Montréal, QC jobs
          Infrastructure Operations Support Specialist - NTT DATA Services - Montréal, QC      Cache   Translate Page      
5+ years experience programming with SQL, Regular Expressions, XML, BASH, KSH, Perl and Python. At NTT DATA Services, we know that with the right people on...
From NTT Data - Wed, 01 Aug 2018 20:04:56 GMT - View all Montréal, QC jobs
          Business Intelligence Analyst - Latham & Watkins LLP - Los Angeles, CA      Cache   Translate Page      
Experience with Tableau or other data visualization tools is preferred, along with experience with R, Python, NoSQL technologies such as Hadoop, Cassandra,...
From Latham & Watkins LLP - Sat, 18 Aug 2018 05:12:35 GMT - View all Los Angeles, CA jobs
          Installing Intel® Performance Libraries and Intel® Distribution for Python Using YUM Repository      Cache   Translate Page      
This page provides general installation and support notes about the Community forum supported Intel® Performance Libraries and Intel® Distribution for Python as they are distributed via the YUM repositories described below.
          Java序列化的状态      Cache   Translate Page      

Key takeaways

  • Java serialization has introduced security vulnerabilities in many libraries.
  • Modularizing serialization is under open discussion.
  • If serialization becomes a module, developers will be able to remove it from the attack surface.
  • Removing other modules eliminates the risks they bring.
  • Instrumentation provides a way to weave in security controls, offering modern defenses.

For years, Java's serialization feature has been plagued by security vulnerabilities and zero-day attacks, earning it nicknames such as "the gift that keeps on giving" and "the fourth unforgivable curse."

In response, the OpenJDK contributor team has discussed ways to limit access to serialization, such as extracting it into a jigsaw module that can be removed, so that hackers cannot attack what isn't there.

Articles such as "Serialization Must Die" have argued that this would help prevent exploitation of vulnerabilities in some popular software (such as VCenter 6.5).

What is serialization?

Serialization has been part of the Java platform since JDK 1.1 was released in 1997.

It is used to share object representations across sockets, or to save an object and its state for later use (deserialization).

In JDK 10 and below, serialization is present on all systems as part of java.base and java.io.Serializable.

GeeksForGeeks describes in detail how serialization works.

For more code examples of how to use serialization, see Baeldung's introduction to Java serialization.

Challenges and limitations of serialization

The limitations of serialization show up mainly in two areas:

  1. New object-transport strategies have emerged, such as JSON, XML, Apache Avro, and Protocol Buffers.
  2. The 1997 serialization strategy could not foresee how modern internet services would be built and attacked.

The basic premise of a serialization exploit is to find classes that perform privileged operations on deserialized data and then pass them malicious code. To understand the complete attack process, see Matthias Kaiser's 2015 talk "Exploiting Deserialization Vulnerabilities in Java", which provides examples starting at slide 14.

Most other security research related to serialization builds on the work of Chris Frohoff, Gabriel Lawrence, and Alvaro Munoz.

Where is serialization? How do I know whether my application uses it?

To remove serialization, start with the java.io package, which is part of the java.base module. The most common usage scenarios are:

Developers using these methods should consider alternative ways of storing and reading back data. Eishay Smith has published performance metrics for several different serialization libraries. When evaluating performance, security considerations should be included in the benchmark metrics. Default Java serialization is "faster", but vulnerabilities come knocking just as fast.

How can we reduce the impact of serialization flaws?

Project Amber includes a discussion about isolating the serialization API. The idea is to move serialization from java.base into a separate module so applications can remove it entirely. No outcome for this proposal was reached when the JDK 11 feature set was decided, but the discussion may continue in future Java versions.

Reducing serialization exposure through runtime protection

A system that can monitor risk and automate repeatable security expertise is useful for many enterprises. Java applications can embed JVMTI tools into a security monitoring system, planting sensors into the application via instrumentation. Contrast Security is a free product in this space and a winner of the Duke's Choice award at JavaOne. Like other software projects (such as MySQL or GraalVM), the Contrast Security community edition is free for developers.

The benefit of applying runtime instrumentation to Java security is that it requires no code changes and can be integrated directly into the JRE.

It is somewhat similar to aspect-oriented programming: non-invasive bytecode is embedded at sources (entry points where remote data enters the application), sinks (where data is used in unsafe ways), and transfers (where security tracking needs to move from one object to another).

By integrating with each "sink" (such as ObjectInputStream), the runtime protection mechanism can add extra capabilities. Before the deserialization filters were backported from JDK 9, this capability was critical against serialization and other attack types (such as SQL injection).

Integrating this runtime protection only requires changing startup flags to add a javaagent to the startup options. For example, in Tomcat, the flag can be added in bin/setenv.sh:

CATALINA_OPTS=-javaagent:/Users/ecostlow/Downloads/Contrast/contrast.jar

After startup, Tomcat will initialize the runtime protection and inject it into the application. This separation of concerns lets the application focus on business logic while the security analyzer handles security in the right places.

Other useful security techniques

During maintenance, instead of manually maintaining a long list, use a system like OWASP Dependency-Check, which identifies dependencies with known security vulnerabilities and suggests upgrades. Also consider automatic library updates through a system like DependABot.

Although well-intentioned, the default Oracle serialization filters share the same design flaw as the SecurityManager and its related sandbox vulnerabilities. Because they conflate role permissions and require knowing the unknowable in advance, large-scale adoption of the feature is limited: system administrators don't know what is in the code, so they cannot list the class files; developers don't know the environment; and even DevOps teams often don't know the requirements of other parts of the system (such as the application server).

Removing the security risk of unused modules

Java 9's modular JDK can create custom runtime images with unnecessary modules removed, using a tool called jlink. The benefit of this approach is that hackers cannot attack what isn't there.

It will take time from proposing modular serialization until applications can actually use it and adopt other new serialization capabilities, but as the proverb says: "The best time to plant a tree was twenty years ago; the second best time is now."

Stripping out Java's native serialization should also provide better interoperability for most applications and microservices. By using standard formats (such as JSON or XML), developers can more easily communicate between services written in different languages: compared with Java 7 binary blobs, a Python microservice typically has better integration for reading JSON documents. However, while the JSON format simplifies object sharing, the "Friday the 13th JSON attacks" against Java and .NET parsers prove there is no silver bullet (white paper).

Until that stripping happens, serialization still remains in java.base. The techniques below reduce the risks associated with other modules, and they can still be used after serialization is modularized.

Example: modularizing JDK 10 for Apache Tomcat 8.5.31

In this example, we will run Apache Tomcat on a modular JRE and remove any JDK modules we don't need. We end up with a custom JRE that has a smaller attack surface and can still run the application.

Determine which modules are needed

The first step is to check which modules the application actually uses. The OpenJDK tool jdeps can scan the bytecode in JAR files and list those modules. Like most users, for code we didn't write ourselves we simply don't know which dependencies or modules it needs, so I use the scanner to detect them and produce a report.

The command to list the modules required by a single JAR file is:

jdeps -s JarFile.jar

It will list the module information:

tomcat-coyote.jar -> java.base
tomcat-coyote.jar -> java.management
tomcat-coyote.jar -> not found

Finally, each module (the right-hand part) should be added to a module file as the application's base modules. This file is called module-info.java; the hyphen in the file name indicates that it does not follow standard Java naming conventions and needs special handling.

The following command combination lists all the modules in a usable file; run it from the Tomcat root directory:

find . -name *.jar ! -path "./webapps/*" ! -path "./temp/*" -exec jdeps -s {} \; | sed -En "s/.* -\> (.*)/  requires \1;/p" | sort | uniq | grep -v "not found" | xargs -0 printf "module com.infoq.jdk.TomcatModuleExample{\n%s}\n"

The output of these commands is written to the lib/module-info.java file, as follows:

module com.infoq.jdk.TomcatModuleExample{
  requires java.base;
  requires java.compiler;
  requires java.desktop;
  requires java.instrument;
  requires java.logging;
  requires java.management;
  requires java.naming;
  requires java.security.jgss;
  requires java.sql;
  requires java.xml.ws.annotation;
  requires java.xml.ws;
  requires java.xml;
}

This list is much shorter than the full list of Java modules.

The next step is to put this file into a JAR:

javac lib/module-info.java
jar -cf lib/Tomcat.jar lib/module-info.class

Finally, create a JRE for the application:

jlink --module-path lib:$JAVA_HOME/jmods --add-modules ThanksInfoQ_Costlow --output dist

The output of this command is a runtime containing just the right modules needed to run the application, with no performance overhead and without the security risks that may lurk in unused modules.

Compared with the base JDK 10, only 19 of the 98 core modules are used.

java --list-modules

com.infoq.jdk.TomcatModuleExample
java.activation@10.0.1
java.base@10.0.1
java.compiler@10.0.1
java.datatransfer@10.0.1
java.desktop@10.0.1
java.instrument@10.0.1
java.logging@10.0.1
java.management@10.0.1
java.naming@10.0.1
java.prefs@10.0.1
java.security.jgss@10.0.1
java.security.sasl@10.0.1
java.sql@10.0.1
java.xml@10.0.1
java.xml.bind@10.0.1
java.xml.ws@10.0.1
java.xml.ws.annotation@10.0.1
jdk.httpserver@10.0.1
jdk.unsupported@10.0.1

After running this command, you can use the runtime in the dist folder to run the application.

Look at this list: the deployment plugin (applets) is gone, JDBC (SQL) is gone, JavaFX is gone, and many other modules are gone. From a performance standpoint, these modules no longer have any impact. From a security standpoint, hackers cannot attack what isn't there. Keeping the modules the application needs is important, because without them the application will not run properly.

About the author

Erik Costlow is Oracle's product manager for Java 8 and 9, focusing on security and performance. His security expertise involves threat modeling, code analysis, and security sensor instrumentation. Before entering technology, Erik was a circus performer who could juggle fire on a three-wheel vertical unicycle.

Read the original English article: The State of Java Serialization

 

Source: http://www.infoq.com/cn/articles/java-serialization-aug18

 


          Begin to Code with Python      Cache   Translate Page      
          Python Artificial Intelligence Projects for Beginners      Cache   Translate Page      
          Looking for Python Expert      Cache   Translate Page      
I am going to run my Python project on my local machine; by the way, I can't run it now, so I want a Python expert. This task should be completed at once. If you are an expert in Python, don't hesitate to apply for this job. (Budget: $10 - $30 USD, Jobs: Linux, Python, Software Architecture)
          Comments by ben on the post "Python Örnekleri"      Cache   Translate Page      
Good luck with your work.
          Comments by ben on the post "Python Örnekleri"      Cache   Translate Page      
Brother, may God bless you, I made the mistake a hundred times. I'm just getting started with programming, I couldn't understand what this was and couldn't fix the error either. God bless you. Good luck to you too. Have a nice day.
          Full stack developer - Workbridge Associates - San Clara, MB      Cache   Translate Page      
40% Python/ Java. Strong expertise in Python, Java. Right now they are looking for a Full stack Engineer with experience working with react or Angular on the... $130,000 - $160,000 a year
From Workbridge Associates - Tue, 21 Aug 2018 04:56:08 GMT - View all San Clara, MB jobs
          Full Stack Engineer - Workbridge Associates - San Clara, MB      Cache   Translate Page      
30% Python/ Java. Strong expertise in Python, Java. The ultimate goal is to give life insurance coverage for Health-Conscious groups such as training athletes,... $120,000 - $160,000 a year
From Workbridge Associates - Thu, 02 Aug 2018 01:23:52 GMT - View all San Clara, MB jobs
          Sofware Developer - API - Great West Life Canada Insurance - Winnipeg, MB      Cache   Translate Page      
Coding experience in C#, Java, Python and JavaScript. Write scripts and modules using 3rd party automation tool(s)....
From Indeed - Sat, 01 Sep 2018 00:41:44 GMT - View all Winnipeg, MB jobs
          Security Engineer - SkipTheDishes - Winnipeg, MB      Cache   Translate Page      
Extensive knowledge of Python 3, and Java. Skip is growing at a rapid pace and with that, we continue to expand our elite team of software developers and...
From SkipTheDishes - Thu, 23 Aug 2018 05:38:26 GMT - View all Winnipeg, MB jobs
          Prairies - Business Technology Analyst - Technology Consulting - New Grad 2019 (Undergraduate) - Deloitte - Winnipeg, MB      Cache   Translate Page      
Programming / Scripting Languages (e.g., Java, C#, C/C++, Ruby on Rails, Node.js, Springboot, Python, JavaScript)....
From Deloitte - Mon, 20 Aug 2018 16:57:03 GMT - View all Winnipeg, MB jobs
          Python joins movement to dump 'offensive' master, slave terms      Cache   Translate Page      
Programming language bites its tongue to be more inclusive
50 Shades of Python
          Changing the future of the retail & food-service industries! Hiring an Engineering Manager, by 株式会社リノシス      Cache   Translate Page      
As the leader of our server-side engineering team, you will use your Ruby/Python skills to take charge of overall product development for the retail and food-service industries. [Responsibilities] While working on product and service development as an engineer, you will manage the software development team. Your mission is to commit to team building and member growth, enable members to work with excitement, and deliver maximum results. Example products: a customer-service agent that "reads the room" using image analysis, voice analysis, and purchase history; a system that calculates optimal prices per customer and per moment (dynamic pricing x personalization). Concretely, the work centers on: product and service development; setting up the development team structure and processes; managing each team member; giving feedback to team members. [Required skills] Experience managing an engineering team; development experience in Ruby and/or Python; web application development experience; development experience at a web-focused startup. [Preferred skills] Experience as an infrastructure engineer; Rails development experience; experience developing high-traffic services (load handling); front-end experience (React, Vue.js, etc.) is a plus; development experience with Redshift. We are looking for a full-stack engineer with broad experience, skills, and knowledge. If you are even a little interested, why not come visit our company? We look forward to your application!
          Coaching Vacancies with Cambridge University American Football      Cache   Translate Page      
The Cambridge University Pythons are currently looking to recruit additional positional and Assistant Coaches and a Defensive Coordinator for the
          Splunk Developer - JM Group - Montréal, QC      Cache   Translate Page      
Splunk with Perl or Python or shell...
From JM GROUP - Wed, 22 Aug 2018 03:30:55 GMT - View all Montréal, QC jobs
          100CC pits python      Cache   Translate Page      
I purchased this plane last year and only got a chance to fly it once. This is a Pitts Python that was recovered in a different scheme. Great looking and flying airplane. Comes with a GP123, matching carbon spinner and Falcon prop, 8711 throughout, and 3 brand new 3000mil life batteries. For any...
          Senior Lead Software Engineer      Cache   Translate Page      
WI-Madison, Senior / Lead Software Engineer (Java, Python, and .Net) Our client in the telecommunications industry has an immediate opportunity for a full time Senior / Lead Software Engineer at their headquarters in Madison, WI. This is a full time, direct placement opportunity! They are a Fortune 1000 company and seeking a seasoned Senior or Lead level Software Engineer that is versed with object-oriented p
          Python Current Date Time      Cache   Translate Page      

We can use the Python datetime module to get the current date and time of the local system.

from datetime import datetime
# Current date time in local system
print(datetime.now())

Output: 2018-09-12 14:17:56.456080

Python Current Date
If you are interested only in the date of the local system, you can use the datetime date() method.

print(datetime.date(datetime.now())) […]
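As an added sketch (not from the original post), the same datetime object can also be split into its date and time parts:

from datetime import datetime

now = datetime.now()   # current local date and time
print(now.date())      # date part only, e.g. 2018-09-12
print(now.time())      # time part only, e.g. 14:17:56.456080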

The post Python Current Date Time appeared first on JournalDev.


          Codementor: Dynamic Task Routing in Celery      Cache   Translate Page      
This post was originally published on Celery. The Missing Tutorials (https://www.python-celery.com/) on June 5th, 2018. All source code examples used in this blog post can be found on GitHub: ...
          Kay Hayen: Nuitka this week #6      Cache   Translate Page      

Holiday

In my 2 weeks holiday, I indeed focused on a really big thing, and got more done that I had hoped for. For C types, nuitka_bool, which is a tri-state boolean with true, false and unassigned, can be used for some variables, and executes some operations without going through objects anymore.

bool

Condition codes are no longer special. They all need a boolean value from the expression used as a condition, and there was a special paths for some popular expressions for conditions, but of course not all. That is now a universal thing, conditional statement/expressions will now simply ask to provide a temp variable of value nuitka_bool and then code generation handles it.

For where it is used, code gets a lot lighter, and of course faster, although I didn't measure it yet. Going to Py_True/Py_False and comparing with it, wasn't that optimal, and it's nice this is now so much cleaner as a side effect of that C bool work.

This seems to be so good, that actually it's the default for this to be used in 0.6.0, and that itself is a major break through. Not so much for actual performance, but for structure. Other C types are going to follow soon and will give massive performance gains.

void

And what was really good, is that not only did I get bool to work almost perfectly, I also started work on the void C target type and finished that after my return from holiday last weekend, which lead to new optimization that I am putting in the 0.5.33 release that is coming soon, even before the void code generation is out.

The void C type cannot read values back, and unused values should not be used, so this gives errors for cases where that becomes obvious.

a or b

Consider this expression. The or expression is going to produce a value, which is then released, but not used otherwise. New optimization creates a conditional statement out of it, which takes a as the condition and, if not true, evaluates b but ignores it.

if not a:
   b

The void evaluation of b can then do further optimization for it.

Void code generation can therefore highlight missed opportunities for this kind of optimization, and found a couple of these. That is why I was going for it, and I feel it pays off. Code generation checking optimization here is a really nice synergy between the two.

Plus I got all the tests to work with it, and solved the missing optimizations it found very easily. And instead of allocating an object now, not assigning is often creating more obvious code. And that too allowed me to find a couple of bugs by C compiler warnings.

Obviously I will want to run a compile all the world test before making it the default, which is why this will probably become part of 0.6.1 to be the default.

module_var

Previously variable codes were making a hard distinction for module variables and make them use their own helper codes. Now this is encapsulated in a normal C type class like nuitka_bool, or the one for PyObject * variables, and integrates smoothly, and even got better. A sign things are going smooth.

Goto Generators

Still not released. I delayed it after my holiday, and due to the heap generator change, after stabilizing the C types work, I want to first finish a tests/library/compile_python_module.py resume run, which will, for an Anaconda3 install, compile all the code found in there.

Right now it's still doing that, and even found a few bugs. The heap storage can still cause issues, as can changes to cloning nodes, which happens for try nodes and their finally blocks.

This should finish these days. I looked at performance numbers and found that develop is indeed only faster, and factory due to even more optimization will be yet faster, and often noteworthy.

Benchmarks

The Speedcenter of Nuitka is what I use right now, but it's only showing the state of 3 branches and compared to CPython, not as much historical information. Also the organization of tests is poor. At least there are tags for what improved.

After release of Nuitka 0.6.0 I will show more numbers, and I will start to focus on making it easier to understand. Therefore no link right now, google if you are so keen. ;-)

Twitter

During the holiday sprint, and even after, I am going to Tweet a lot about what is going on for Nuitka. So follow me on twitter if you like, I will post important stuff as it happens there:

Follow @kayhayen

And lets not forget, having followers make me happy. So do re-tweets.

Poll on Executable Names

So I put a poll up on Twitter, which is now over. But it made me implement a new scheme, due to popular consensus

Hotfixes

Even more hotfixes. I even did 2 during my holiday, however packages built only later.

Threaded imports of modules on 3.4 or higher were not using the locking they should use. Multiprocessing on Windows with Python3 had even more problems, and the --include-package and --include-module options were present, but not working.

That last one was actually very strange. I had added a new option group for them, but not added it to the parser. Result: Option works. Just does not show up in help output. Really?

Help Wanted

If you are interested, I am tagging issues help wanted and there is a bunch, and very like one you can help with.

Nuitka definitely needs more people to work on it.

Plans

Working down the release backlog. Things should be out. I am already working on what should become 0.6.1, but it's not yet 0.5.33 released. Not a big deal, but 0.6.0 has 2 really important fixes for performance regressions that have happened in the past. One is for loops, making that faster is probably like the most important one. The other for constant indexing, probably also very important. Very much measurable in pystone at least.

In the mean time, I am preparing to get int working as a target C type, so e.g. comparisons of such values could be done in pure C, or relatively pure C.

Also, I noticed that e.g. in-place operations can be way more optimized and did stuff for 0.6.1 already in this domain. That is unrelated to C type work, but kind of follows a similar route maybe. How to compare mixed types we know of, or one type only. That kind of things needs ideas and experiments.

Having int supported should help getting some functions to C speeds, or at least much closer to it. That will make noticeable effects in many of the benchmarks. More C types will then follow one by one.

Donations

If you want to help, but cannot spend the time, please consider to donate to Nuitka, and go here:

Donate to Nuitka


          Comments on "Lire la qualité et le taux de pollution de l'air (API Atmo-Aura)" by Nomis      Cache   Translate Page      
Hello, for your SMS-sending script, what do you use to send the messages? A 3G dongle, the Free API, or something else? Otherwise, with Domoticz you can manipulate JSON directly with the JSON.lua lib: https://github.com/rxi/json.lua There is also a Python plugin for Domoticz: https://github.com/999LV/PrevAir It fetches the station closest to you using the GPS coordinates entered in the Domoticz configuration.
          Introduction to python web scraping and the Beautiful Soup library      Cache   Translate Page      
https://linuxconfig.org/introduction-to-python-web-scraping-and-the-beautiful-soup-library

Objective

Learning how to extract information out of an html page using python and the Beautiful Soup library.

Requirements

  • Understanding of the basics of python and object oriented programming

Difficulty

EASY

Conventions

  • # - requires given linux command to be executed with root privileges either directly as a root user or by use of sudo command
  • $ - given linux command to be executed as a regular non-privileged user

Introduction

Web scraping is a technique which consists in the extraction of data from a web site through the use of dedicated software. In this tutorial we will see how to perform basic web scraping using Python and the Beautiful Soup library. We will use python3 targeting the homepage of Rotten Tomatoes, the famous aggregator of reviews and news for films and tv shows, as a source of information for our exercise.

Installation of the Beautiful Soup library

To perform our scraping we will make use of the Beautiful Soup python library, therefore the first thing we need to do is to install it. The library is available in the repositories of all the major GNU\Linux distributions, therefore we can install it using our favorite package manager, or by using pip, the python native way for installing packages.

If the use of the distribution package manager is preferred and we are using Fedora:
$ sudo dnf install python3-beautifulsoup4
On Debian and its derivatives the package is called beautifulsoup4:
$ sudo apt-get install beautifulsoup4
On Archilinux we can install it via pacman:
$ sudo pacman -S python-beatufilusoup4
If we want to use pip, instead, we can just run:
$ pip3 install --user BeautifulSoup4
By running the command above with the --user flag, we will install the latest version of the Beautiful Soup library only for our user, therefore no root permissions needed. Of course you can decide to use pip to install the package globally, but personally I tend to prefer per-user installations when not using the distribution package manager.

The BeautifulSoup object

Let's begin: the first thing we want to do is to create a BeautifulSoup object. The BeautifulSoup constructor accepts either a string or a file handle as its first argument. The latter is what interests us: we have the url of the page we want to scrape, therefore we will use the urlopen method of the urllib.request library (installed by default): this method returns a file-like object:

from bs4 import BeautifulSoup
from urllib.request import urlopen

with urlopen('http://www.rottentomatoes.com') as homepage:
    soup = BeautifulSoup(homepage)
At this point, our soup it's ready: the soup object represents the document in its entirety. We can begin navigating it and extracting the data we want using the built-in methods and properties. For example, say we want to extract all the links contained in the page: we know that links are represented by the a tag in html and the actual link is contained in the href attribute of the tag, so we can use the find_all method of the object we just built to accomplish our task:

for link in soup.find_all('a'):
    print(link.get('href'))
By using the find_all method and specifying a as the first argument, which is the name of the tag, we searched for all links in the page. For each link we then retrieved and printed the value of the href attribute. In BeautifulSoup the attributes of an element are stored into a dictionary, therefore retrieving them is very easy. In this case we used the get method, but we could have accessed the value of the href attribute even with the following syntax: link['href']. The complete attributes dictionary itself is contained in the attrs property of the element. The code above will produce the following result:
[...]
https://editorial.rottentomatoes.com/
https://editorial.rottentomatoes.com/24-frames/
https://editorial.rottentomatoes.com/binge-guide/
https://editorial.rottentomatoes.com/box-office-guru/
https://editorial.rottentomatoes.com/critics-consensus/
https://editorial.rottentomatoes.com/five-favorite-films/
https://editorial.rottentomatoes.com/now-streaming/
https://editorial.rottentomatoes.com/parental-guidance/
https://editorial.rottentomatoes.com/red-carpet-roundup/
https://editorial.rottentomatoes.com/rt-on-dvd/
https://editorial.rottentomatoes.com/the-simpsons-decade/
https://editorial.rottentomatoes.com/sub-cult/
https://editorial.rottentomatoes.com/tech-talk/
https://editorial.rottentomatoes.com/total-recall/
[...]
The list is much longer: the above is just an extract of the output, but gives you an idea. The find_all method returns all Tag objects that match the specified filter. In our case we just specified the name of the tag which should be matched, and no other criteria, so all links are returned: we will see in a moment how to further restrict our search.
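As a quick illustrative aside (not part of the original tutorial), the attributes of a Tag can be read in several equivalent ways:

first_link = soup.find('a')
print(first_link['href'])      # subscript access to a single attribute
print(first_link.get('href'))  # same, via the get method
print(first_link.attrs)        # the full attributes dictionary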

A test case: retrieving all "Top box office" titles

Let's perform a more restricted scraping. Say we want to retrieve all the titles of the movies which appear in the "Top Box Office" section of the Rotten Tomatoes homepage. The first thing we want to do is analyze the page html for that section: doing so, we can observe that the elements we need are all contained inside a table element with the "Top-Box-Office" id:

(HTML snippet of the "Top-Box-Office" table omitted from this extract.)
We can also observe that each row of the table holds information about a movie: the title's scores are contained as text inside a span element with class "tMeterScore" inside the first cell of the row, while the string representing the title of the movie is contained in the second cell, as the text of the a tag. Finally, the last cell contains a link with the text that represents the box office results of the film. With those references, we can easily retrieve all the data we want:

from bs4 import BeautifulSoup
from urllib.request import urlopen

with urlopen('https://www.rottentomatoes.com') as homepage:
    soup = BeautifulSoup(homepage.read(), 'html.parser')

# first we use the find method to retrieve the table with 'Top-Box-Office' id
top_box_office_table = soup.find('table', {'id': 'Top-Box-Office'})

# then we iterate over each row and extract movies information
for row in top_box_office_table.find_all('tr'):
    cells = row.find_all('td')
    title = cells[1].find('a').get_text()
    money = cells[2].find('a').get_text()
    score = row.find('span', {'class': 'tMeterScore'}).get_text()
    print('{0} -- {1} (TomatoMeter: {2})'.format(title, money, score))
The code above will produce the following result:
Crazy Rich Asians -- .9M (TomatoMeter: 93%)
The Meg -- .9M (TomatoMeter: 46%)
The Happytime Murders -- .6M (TomatoMeter: 22%)
Mission: Impossible - Fallout -- .2M (TomatoMeter: 97%)
Mile 22 -- .5M (TomatoMeter: 20%)
Christopher Robin -- .4M (TomatoMeter: 70%)
Alpha -- .1M (TomatoMeter: 83%)
BlacKkKlansman -- .2M (TomatoMeter: 95%)
Slender Man -- .9M (TomatoMeter: 7%)
A.X.L. -- .8M (TomatoMeter: 29%)
We introduced a few new elements; let's look at them. The first thing we did was retrieve the table with the 'Top-Box-Office' id, using the find method. This method works similarly to find_all, but while the latter returns a list containing the matches found (or an empty list if there are no matches), the former always returns the first result, or None if no element with the specified criteria is found.

The first element provided to the find method is the name of the tag to be considered in the search, in this case table. As a second argument we passed a dictionary in which each key represents an attribute of the tag with its corresponding value. The key-value pairs provided in the dictionary represents the criteria that must be satisfied for our search to produce a match. In this case we searched for the id attribute with "Top-Box-Office" value. Notice that since each id must be unique in an html page, we could just have omitted the tag name and use this alternative syntax:

top_box_office_table = soup.find(id='Top-Box-Office')
Once we retrieved our table Tag object, we used the find_all method to find all the rows, and iterate over them. To retrieve the other elements, we used the same principles. We also used a new method, get_text: it returns just the text part contained in a tag, or if none is specified, in the entire page. For example, knowing that the movie score percentage are represented by the text contained in the span element with the tMeterScore class, we used the get_text method on the element to retrieve it.

In this example we just displayed the retrieved data with a very simple formatting, but in a real-world scenario, we might have wanted to perform further manipulations, or store it in a database.
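For instance, here is a minimal sketch (not part of the original tutorial) of how the same loop could write the rows into a local SQLite database instead of printing them; it reuses the top_box_office_table object from the example above, and the movies.db file name and table layout are arbitrary choices:

import sqlite3

# Store the scraped rows in a small local database (illustrative only).
connection = sqlite3.connect('movies.db')
connection.execute(
    'CREATE TABLE IF NOT EXISTS top_box_office (title TEXT, money TEXT, score TEXT)')

for row in top_box_office_table.find_all('tr'):
    cells = row.find_all('td')
    title = cells[1].find('a').get_text()
    money = cells[2].find('a').get_text()
    score = row.find('span', {'class': 'tMeterScore'}).get_text()
    connection.execute(
        'INSERT INTO top_box_office VALUES (?, ?, ?)', (title, money, score))

connection.commit()
connection.close()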

Conclusions

In this tutorial we just scratched the surface of what we can do using Python and the Beautiful Soup library to perform web scraping. The library contains a lot of methods you can use for a more refined search or to better navigate the page: for this I strongly recommend consulting the very well written official docs.

          Turn your vim editor into a productivity powerhouse      Cache   Translate Page      
https://opensource.com/article/18/9/vi-editor-productivity-powerhouse

These 20+ useful commands will enhance your experience using the vim editor.

Editor's note: The headline and article originally referred to the "vi editor." It has been updated to the correct name of the editor: "vim."
A versatile and powerful editor, vim includes a rich set of potent commands that make it a popular choice for many users. This article specifically looks at commands that are not enabled by default in vim but are nevertheless useful. The commands recommended here are expected to be set in a vim configuration file. Though it is possible to enable commands individually from each vim session, the purpose of this article is to create a highly productive environment out of the box.

Before you begin

The commands or configurations discussed here go into the vim startup configuration file, vimrc, located in the user home directory. Follow the instructions below to set the commands in vimrc:
(Note: The vimrc file is also used for system-wide configurations in Linux, such as /etc/vimrc or /etc/vim/vimrc. In this article, we'll consider only user-specific vimrc, present in user home folder.)
In Linux:
  • Open the file with vi $HOME/.vimrc
  • Type or copy/paste the commands in the cheat sheet at the end of this article
  • Save and close (:wq)
In Windows:
  • First, install gvim
  • Open gvim
  • Click Edit --> Startup settings, which opens the _vimrc file
  • Type or copy/paste the commands in the cheat sheet at the end of this article
  • Click File --> Save
Let's delve into the individual vi productivity commands. These commands are classified into the following categories:
  1. Indentation & Tabs
  2. Display & Format
  3. Search
  4. Browse & Scroll
  5. Spell
  6. Miscellaneous

1. Indentation & Tabs

To automatically align the indentation of a line in a file:
set autoindent
Smart Indent uses the code syntax and style to align:
set smartindent
Tip: vim is language-aware and provides a default setting that works efficiently based on the programming language used in your file. There are many default configuration commands, including cindent, cinoptions, indentexpr, etc., which are not explained here. syn is a helpful command that shows or sets the file syntax.
To set the number of spaces to display for a tab:
set tabstop=4
To set the number of spaces to display for a “shift operation” (such as ‘>>’ or ‘<<’):
set shiftwidth=4
If you prefer to use spaces instead of tabs, this option inserts spaces when the Tab key is pressed. This may cause problems for file types that rely on literal tab characters, such as Makefiles. In such cases, you may set this option based on the file type (see autocmd); for example, autocmd FileType make setlocal noexpandtab keeps real tabs in Makefiles.
set expandtab

2. Display & Format

To show line numbers:
set number
To wrap text when it crosses the maximum line width:
set textwidth=80
To wrap text based on a number of columns from the right side:
set wrapmargin=2
To identify open and close brace positions when you traverse through the file:
set showmatch

3. Search

To highlight the searched term in a file:
set hlsearch
To perform incremental searches as you type:
set incsearch
To search ignoring case (many users prefer not to use this command; set it only if you think it will be useful):
set ignorecase
To search without considering ignorecase when both ignorecase and smartcase are set and the search pattern contains uppercase:
set smartcase
For example, if the file contains: test
Test
When both ignorecase and smartcase are set, a search for “test” finds and highlights both:
test
Test
A search for “Test” highlights or finds only the second line:
test
Test

4. Browse & Scroll

For a better visual experience, you may prefer to have the cursor somewhere in the middle rather than at the very top or bottom of the screen. The following option keeps at least five lines visible above and below the cursor while scrolling.
set scrolloff=5
Example:
The first image is with scrolloff=0 and the second image is with scrolloff=5.                                                                                                                                                                       
Tip: set sidescrolloff is useful if you also set nowrap.
To display a permanent status bar at the bottom of the vim screen showing the filename, row number, column number, etc.:
set laststatus=2

5. Spell

vim has a built-in spell-checker that is quite useful for text editing as well as coding. vim recognizes the file type and checks the spelling of comments only in code. Use the following command to turn on spell-check for the English language:
set spell spelllang=en_us

6. Miscellaneous

Disable creating backup file: When this option is on, vim creates a backup of the previous edit. If you do not want this feature, disable it as shown below. Backup files are named with a tilde (~) at the end of the filename.
set nobackup
Disable creating a swap file: When this option is on, vim creates a swap file that exists until you start editing the file. Swapfile is used to recover a file in the event of a crash or a use conflict. Swap files are hidden files that begin with . and end with .swp.
set noswapfile
Suppose you need to edit multiple files in the same vim session and switch between them. An annoying feature that's not readily apparent is that the working directory is the one from which you opened the first file. Often it is useful to automatically switch the working directory to that of the file being edited. To enable this option:
set autochdir
vim maintains an undo history that lets you undo changes. By default, this history is active only until the file is closed. vim includes a nifty feature that maintains the undo history even after the file is closed, which means you may undo your changes even after the file is saved, closed, and reopened. The undo file is a hidden file saved with the .un~ extension.
set undofile
To set audible alert bells (which sound a warning if you try to scroll beyond the end of a line):
set errorbells
If you prefer, you may set visual alert bells:
set visualbell

Bonus

vim provides long-format as well as short-format commands. Either format can be used to set or unset the configuration.
Long format for the autoindent command:
set autoindent
Short format for the autoindent command:
set ai
To see the current configuration setting of a command without changing its current value, use ? at the end:
set autoindent?
To unset or turn off a command, most commands take no as a prefix:
set noautoindent
It is possible to set a command for one file but not for the global configuration. To do this, open the file and type :, followed by the set command. This configuration is effective only for the current file editing session.
For help on a command:
:help autoindent
Note: The commands listed here were tested on Linux with Vim version 7.4 (2013 Aug 10) and Windows with Vim 8.0 (2016 Sep 12).
These useful commands are sure to enhance your vim experience. Which other commands do you recommend?

Cheat sheet

Copy/paste this list of commands in your vimrc file:

" Indentation & Tabs
set autoindent
set smartindent
set tabstop=4
set shiftwidth=4
set expandtab
set smarttab

" Display & format
set number
set textwidth=80
set wrapmargin=2
set showmatch

" Search
set hlsearch
set incsearch
set ignorecase
set smartcase

" Browse & Scroll
set scrolloff=5
set laststatus=2

" Spell
set spell spelllang=en_us

" Miscellaneous
set nobackup
set noswapfile
set autochdir
set undofile
set visualbell
set errorbells



          Bay-Chay-Tian      Cache   Translate Page      
Bay-Chay-Tian.


my 'throne' - wakakaka 

Yes, the phrase above is in Penang Hokkien which means literally ta'mampu-duduk-takhta (singgasana) [couldn't sit or more correctly, couldn't ascend to the throne (of power, eg, as a PM)].

The 'throne' in this post has nothing to do with royal thrones but more with 'holding the top political power'.

It's a popular but sad phrase among Chinese who use it to describe people who were NOT DESTINED to become Emperor (in olden days) or PM in today's political world [or alternatively President in a presidential political system as in the USA, France, South Korea, Taiwan].



Another Chinese phrase would be 'Without the Mandate of Heaven'.



Hillary Rodham Clinton 

Recently, we witnessed a case of bay-chay-tian in the USA where Hillary Rodham Clinton was just not fated to become President of the USA, though she possessed(s) all the qualifications, experience, competency, pedigree and personality, and was expected to have won hands down against a political clown. Instead, the Americans voted in the dungu, wakakaka.



An Australian case would have been the sad failure of Kim Beazley (Labor), one of the most capable Labor ministers, to become PM. He was DPM to Paul Keating, but after Keating left the party to his stewardship, Labor lost out to the Coalition.



Kim Beazley 

A Rhodes scholar and an expert on American affairs, he was very likeable but never did become what he wanted to be, PM of Australia. Instead Labor jokers like Mark Latham including ultra-narcissist Kevin Rudd succeeded. Fate dealt a cruel joke on Australian politics in the same way as it did recently to the USA.



loose cannon Mark Latham 


ultra-narcissist Mandarin-speaking Kevin Rudd 

In Malaysia of course the most famous person suffering from the curse of bay-chay-tian would be Ku Li. I needn't go into his sad political life to inform you how he was played out left, right and centre. Others you could add to Ku Li's tragic category would be Musa Hitam and the late Tun Dr Ismail, probably the BEST PM we never had but who was called by his Maker too soon. Where they all failed, Abdullah Ahmad Badawi (AAB) succeeded, wakakaka.



Ku Li 


Musa Hitam 


late Tun Dr Ismail 

All the above brings us to Anwar Ibrahim, the man who missed out and missed out and missed out over the years since 1997.



His seeming impatience might have riled Mahathir-adorers and Azmin-worshippers, but how is he to be PM in two years time, as mandated by Pakatan, if he doesn't warm up a bit on power politics (having been out of touch for 21 years including 11 years of imprisonment), and probably replace his incompetent wife as DPM and then be prepared to take over from Mahathir who is fast changing his promise to leave in two years time as he has just changed his promise to dissolve the CEP in 100 days.



Mahathir is a man you cannot trust. He is extremely slithery with his words and a master Machiavellian. To top all these sinister doubts about the Old Man, it's known and proven he hates, abhors and detests Anwar with a vengeance.





Then, in the shadowy wings, Azmin Ali waits patiently like a reticulated python, all coiled up and ready to strike and swallow up the Anwar legacy. His loyalist Zuraida Kamaruddin has even brazenly demanded that Anwar state his stand regarding the way in which the Port Dickson (PD) parliamentary seat was vacated for him, while another of Azmin's dwarfs, Tian Chua, has expressed his anger at Rafizi's very-hush-hush move to enthrone Anwar next to the PM's position.



hmmm, now who shall I fCk up next?

It's all very oppressing, intimidating and forbidding for Anwar whose situational awareness tells him that now must be the time to move.

But Anwar will find his former popularity, especially with the youth, is no more - those youths have grown up and a new generation of voters is now in place, voters who (not realising who Mahathir had been in his 4th Reich) adulate Mahathir and even believe Anwar to be a stumbling block to the Old Man's continuing tenure as PM, preferably (in their moronic minds) for the next 1,000 years. They are now unlikely to support or even vote for Anwar.

National Patriots Association, the military veteran group, has warned Anwar against causing a by-election in PD as they favour the continuation of former-Admiral Danyal Balagopal Abdullah as the local MP, but the bloke himself has insisted on resigning and willingly giving up his PD seat for Anwar to contest.



Danyal Balagopal Abdullah

Nonetheless, it's not nice for Anwar to hear retired Brig-Gen Raja Arshad warning him of a military 'backlash' in PD if he were to stand there in a by-election.



Brig-Jen (rtd) Raja Arshad 

Even Siti Kasim, the renowned activist, has urged PD voters to vote against Anwar, saying:

PKR and Anwar's family must be taught a lesson, because this is the third time they have held an election just to make way for that top PKR leader.

They think they can do whatever they like, even to our detriment? No!




Siti Kasim 

Siti is pissed off that just for the sake of bringing in cultist personality like Anwar, by-elections after by-elections have been unnecessarily happening.

Thus, with several supposedly friendly forces (including Azmin's Dökkálfar Dwarfs) now being extremely unfriendly to him, Anwar MUST win this time or he'll be another bay-chay-tian.

I am not politically clued on PD but I am sure the Rafizi's Pandan Cats would have done their mathematics to confidently stand Anwar there.

Ironically, PD was the location about which Anwar had once threatened to reveal sordid tales involving an allegedly amorous (then-DPM) Najib.

That's when I first came to hear of his manmanlai teasing to tell all about Najib's alleged amorous activities during an Ijok by-election campaign. Since then, I have named Anwar as Mr Manmanlai, wakakaka. But I hope PD won't be Najib's last laugh at Anwar.



 (above) Dökkálfar Dwarfs [Latheefa Koya not in photo]

(below) Pandan Cats

Demographics tell us that PD voters comprise 43% Malays, 33% Chinese and 22% Indian. It's the sort of ethnic mix that Mahathir once liked (during his 4th Reich).

When he was PM during the period 1981 to 2003, Mahathir didn't trust the Malays to take him and his party across the finishing line but depended on the nons to do so out of fear of Islamic PAS, thus he liked 'mixed constituencies' along the ratio of 60 to 65% Malays to 35 to 40% nons. Indeed it was Mahathir himself who made the Chinese and Indians into 'King-makers' - he should stop blaming the Chinese for what he did in his personal political interests.



when he boasted of that in 1969, it cost him a seat in parliament and 5 years out in the cold - since then he was more circumspect though deep in his heart he still detests the nons

In GE14, Danyal garnered 36,225 votes to defeat V Mogan of BN-MIC (18,515 votes) and PAS' Mahfuz Roslan (6,594 votes). Thus there is a pro-PH base of around 59%. But this will be a by-election and things may work out differently. If the pro-PH Chinese and Indians don't play Anwar out he will be in parliament soon. Thus PKR and the DAP must work harder for Anwar (unless they don't want to).

Or, will he enter Malaysian history alongside Ku Li, Musa Hitam and Tun Dr Ismail as footnotes?



          Only in Australia: Two Wild Fighting Snakes Fall Through Ceiling Into Family's Bedroom      Cache   Translate Page      

Sometimes snakes need a spot to sort out their differences, and sometimes that spot is your family's spare bedroom. According to USA Today, a pair of male coastal carpet pythons in the middle of a brawl fell through the ceiling duct of a Brisbane, Australia, home and into a bedroom. The change of scenery didn't stop the snakes from fighting. When staff from Snake Catchers Brisbane, Ipswich, Logan & Gold Coast arrived at the home to do their job, the two snakes were still entangled in battle. Lana Field, the snake catcher tasked with separating the pair, filmed her encounter with the . . .
          3 top open source JavaScript chart libraries      Cache   Translate Page      
https://opensource.com/article/18/9/open-source-javascript-chart-libraries

Charts and other visualizations make it easier to convey information from your data.

Charts and graphs are important for visualizing data and making websites appealing. Visual presentations make it easier to analyze big chunks of data and convey information. JavaScript chart libraries enable you to visualize data in a stunning, easy to comprehend, and interactive manner and improve your website's design.
In this article, learn about three top open source JavaScript chart libraries.

1. Chart.js

Chart.js is an open source JavaScript library that allows you to create animated, beautiful, and interactive charts on your application. It's available under the MIT License.
With Chart.js, you can create various impressive charts and graphs, including bar charts, line charts, area charts, linear scale, and scatter charts. It is completely responsive across various devices and utilizes the HTML5 Canvas element for rendering.
Here is example code that draws a bar chart using the library. We'll include it in this example using the Chart.js content delivery network (CDN). Note that the data used is for illustration purposes only.


<!DOCTYPE html>

<html>

<head>

  <script src="https://cdnjs.cloudflare.com/ajax/libs/Chart.js/2.5.0/Chart.min.js"></script>

</head>



<body>

   

    <canvas id="bar-chart" width="300" height="150"></canvas>

    <script>
    // NOTE: the script part of this example was lost when the article was
    // extracted; the code below is a minimal reconstruction that matches the
    // description in the following paragraphs. The labels, colors, and data
    // values are illustrative placeholders, not the article's original ones.
    new Chart(document.getElementById("bar-chart"), {
        type: 'bar',
        data: {
            labels: ["Africa", "Latin America", "Europe", "Asia"],
            datasets: [{
                backgroundColor: ["red", "blue", "green", "orange"],
                data: [3, 4, 2, 5]
            }]
        },
        options: {
            legend: { display: false }
        }
    });
    </script>

</body>

</html>

As you can see from this code, bar charts are constructed by setting type to bar. You can change the direction of the bar to other types—such as setting type to horizontalBar.
The bars' colors are set by providing the type of color in the backgroundColor array parameter.
The colors are allocated to the label and data that share the same index in their corresponding array. For example, "Latin America," the second label, will be set to "blue" (the second color) and 4 (the second number in the data).
Here is the output of this code.

2. Chartist.js

Chartist.js is a simple JavaScript animation library that allows you to create customizable and beautiful responsive charts and other designs. The open source library is available under the WTFPL or MIT License.
The library was developed by a group of developers who were dissatisfied with existing charting tools, so it offers wonderful functionalities to designers and developers.
After including the Chartist.js library and its CSS files in your project, you can use them to create various types of charts, including animations, bar charts, and line charts. It utilizes SVG to render the charts dynamically.
Here is an example of code that draws a pie chart using the library.


<!DOCTYPE html>

<html>

<head>

   

    <link href="https://cdn.jsdelivr.net/chartist.js/latest/chartist.min.css" rel="stylesheet" type="text/css" />

   

    <style>

        .ct-series-a .ct-slice-pie {

            fill: hsl(100, 20%, 50%); /* filling pie slices */

            stroke: white; /*giving pie slices outline */          

            stroke-width: 5px;  /* outline width */

          }



          .ct-series-b .ct-slice-pie {

            fill: hsl(10, 40%, 60%);

            stroke: white;

            stroke-width: 5px;

          }



          .ct-series-c .ct-slice-pie {

            fill: hsl(120, 30%, 80%);

            stroke: white;

            stroke-width: 5px;

          }



          .ct-series-d .ct-slice-pie {

            fill: hsl(90, 70%, 30%);

            stroke: white;

            stroke-width: 5px;

          }

          .ct-series-e .ct-slice-pie {

            fill: hsl(60, 140%, 20%);

            stroke: white;

            stroke-width: 5px;

          }



    </style>

     </head>



<body>



    <div class="ct-chart ct-golden-section"></div>



    <script src="https://cdn.jsdelivr.net/chartist.js/latest/chartist.min.js"></script>



    <script>

       

      var data = {

            series: [45, 35, 20]

            };



      var sum = function(a, b) { return a + b };



      new Chartist.Pie('.ct-chart', data, {

        labelInterpolationFnc: function(value) {

          return Math.round(value / data.series.reduce(sum) * 100) + '%';

            }

              });

     </script>

</body>

</html>


Instead of specifying various style-related components of your project, the Chartist JavaScript library allows you to use various pre-built CSS styles. You can use them to control the appearance of the created charts.
For example, the pre-created CSS class .ct-chart is used to build the container for the pie chart. And, the .ct-golden-section class is used to get the aspect ratios, which scale with responsive designs and saves you the hassle of calculating fixed dimensions. Chartist also provides other classes of container ratios you can utilize in your project. For styling the various pie slices, you can use the default .ct-series-a class. The letter a is iterated with every series count (a, b, c, etc.) such that it corresponds with the slice to be styled.
The Chartist.Pie method is used for creating a pie chart. To create another type of chart, such as a line chart, use Chartist.Line.
Here is the output of the code.

3. D3.js

D3.js is another great open source JavaScript chart library. It's available under the BSD license. D3 is mainly used for manipulating and adding interactivity to documents based on the provided data.
You can use this amazing animation library to visualize your data using HTML5, SVG, and CSS and make your website appealing. Essentially, D3 enables you to bind data to the Document Object Model (DOM) and then use data-based functions to make changes to the document.
Here is example code that draws a simple bar chart using the library.


<!DOCTYPE html>

<html>

<head>

     

    <style>

    .chart div {

      font: 15px sans-serif;

      background-color: lightblue;

      text-align: right;

      padding:5px;

      margin:5px;

      color: white;

      font-weight: bold;

    }

       

    </style>

     </head>



<body>



    <div class="chart"></div>

   

    <script src="https://cdnjs.cloudflare.com/ajax/libs/d3/5.5.0/d3.min.js"></script>



    <script>



      var data = [342,222,169,259,173];



      d3.select(".chart")

        .selectAll("div")

        .data(data)

          .enter()

          .append("div")

          .style("width", function(d){ return d + "px"; })

          .text(function(d) { return d; });

       

 

    </script>

</body>

</html>


The main concept in using the D3 library is to first apply CSS-style selections to point to the DOM nodes and then apply operators to manipulate them—just like in other DOM frameworks like jQuery.
After the data is bound to a document, the .enter() function is invoked to build new nodes for incoming data. All the methods invoked after the .enter() function will be called for every item in the data.
Here is the output of the code.

Wrapping up

JavaScript charting libraries provide you with powerful tools for implementing data visualization on your web properties. With these three open source libraries, you can enhance the beauty and interactivity of your websites.
Do you know of another powerful frontend library for creating JavaScript animation effects? Please let us know in the comment section below.
          Top August Stories: Data Visualization Cheat Sheet; Basic Statistics in Python      Cache   Translate Page      
Also: Eight iconic examples of data visualisation; Data Scientist guide for getting started with Docker.
          5 tips to improve productivity with zsh      Cache   Translate Page      
https://opensource.com/article/18/9/tips-productivity-zsh

The zsh shell offers countless options and features. Here are 5 ways to boost your efficiency from the command line.

The Z shell, known as zsh, is a shell for Linux/Unix-like operating systems. It has similarities to other shells in the sh (Bourne shell) family, such as bash and ksh, but it provides many advanced features and powerful command line editing options, such as enhanced Tab completion.
It would be impossible to cover all the options of zsh here; there are literally hundreds of pages documenting its many features. In this article, I'll present five tips to make you more productive using the command line with zsh.

1. Themes and plugins

Through the years, the open source community has developed countless themes and plugins for zsh. A theme is a predefined prompt configuration, while a plugin is a set of useful aliases and functions that make it easier to use a specific command or programming language.
The quickest way to get started using themes and plugins is to use a zsh configuration framework. There are many available, but the most popular is Oh My Zsh. By default, it enables some sensible zsh configuration options and it comes loaded with hundreds of themes and plugins.
A theme makes you more productive as it adds useful information to your prompt, such as the status of your Git repository or Python virtualenv in use. Having this information at a glance saves you from typing the equivalent commands to obtain it, and it's a cool look. Here's an example of Powerlevel9k, my theme of choice:

The Powerlevel9k theme for zsh
In addition to themes, Oh My Zsh bundles tons of useful plugins for zsh. For example, enabling the Git plugin gives you access to a number of useful aliases, such as:


$ alias | grep -i git | sort -R | head -10

g=git

ga='git add'

gapa='git add --patch'

gap='git apply'

gdt='git diff-tree --no-commit-id --name-only -r'

gau='git add --update'

gstp='git stash pop'

gbda='git branch --no-color --merged | command grep -vE "^(\*|\s*(master|develop|dev)\s*$)" | command xargs -n 1 git branch -d'

gcs='git commit -S'

glg='git log --stat'


There are plugins available for many programming languages, packaging systems, and other tools you commonly use on the command line. Here's a list of plugins I use in my Fedora workstation:
git golang fedora docker oc sudo vi-mode virtualenvwrapper

2. Clever aliases

Aliases are very useful in zsh. Defining aliases for your most-used commands saves you a lot of typing. Oh My Zsh configures several useful aliases by default, including aliases to navigate directories and replacements for common commands with additional options such as:


ls='ls --color=tty'

grep='grep  --color=auto --exclude-dir={.bzr,CVS,.git,.hg,.svn}'


In addition to command aliases, zsh enables two additional useful alias types: the suffix alias and the global alias.
A suffix alias allows you to open the file you type in the command line using the specified program based on the file extension. For example, to open YAML files using vim, define the following alias:
alias -s {yml,yaml}=vim
Now if you type any file name ending with yml or yaml in the command line, zsh opens that file using vim:


$ playbook.yml

# Opens file playbook.yml using vim


A global alias enables you to create an alias that is expanded anywhere in the command line, not just at the beginning. This is very useful to replace common filenames or piped commands. For example:
alias -g G='| grep -i'
To use this alias, type G anywhere you would type the piped command:


$ ls -l G do

drwxr-xr-x.  5 rgerardi rgerardi 4096 Aug  7 14:08 Documents

drwxr-xr-x.  6 rgerardi rgerardi 4096 Aug 24 14:51 Downloads


Next, let's see how zsh helps to navigate the filesystem.

3. Easy directory navigation

When you're using the command line, navigating across different directories is one of the most common tasks. Zsh makes this easier by providing some useful directory navigation features. These features are enabled with Oh My Zsh, but you can enable them by using this command:
setopt autocd autopushd pushdignoredups
With these options set, you don't need to type cd to change directories. Just type the directory name, and zsh switches to it:


$ pwd

/home/rgerardi

$ /tmp

$ pwd

/tmp


To move back, type -:
Zsh keeps the history of directories you visited so you can quickly switch to any of them. To see the list, type dirs -v:


$ dirs -v

0       ~

1       /var/log

2       /var/opt

3       /usr/bin

4       /usr/local

5       /usr/lib

6       /tmp

7       ~/Projects/Opensource.com/zsh-5tips

8       ~/Projects

9       ~/Projects/ansible

10      ~/Documents


Switch to any directory in this list by typing ~# where # is the number of the directory in the list. For example:


$ pwd

/home/rgerardi

$ ~4

$ pwd

/usr/local


Combine these with aliases to make it even easier to navigate:


d='dirs -v | head -10'

1='cd -'

2='cd -2'

3='cd -3'

4='cd -4'

5='cd -5'

6='cd -6'

7='cd -7'

8='cd -8'

9='cd -9'


Now you can type d to see the first ten items in the list and the number to switch to it:


$ d

0       /usr/local

1       ~

2       /var/log

3       /var/opt

4       /usr/bin

5       /usr/lib

6       /tmp

7       ~/Projects/Opensource.com/zsh-5tips

8       ~/Projects

9       ~/Projects/ansible

$ pwd

/usr/local

$ 6

/tmp

$ pwd

/tmp


Finally, zsh automatically expands directory names with Tab completion. Type the first letters of the directory names and TAB to use it:


$ pwd

/home/rgerardi

$ p/o/z (TAB)

$ Projects/Opensource.com/zsh-5tips/


This is just one of the features enabled by zsh's powerful Tab completion system. Let's look at some more.

4. Advanced Tab completion

Zsh's powerful completion system is one of its hallmarks. For simplification, I call it Tab completion, but under the hood, more than one thing is happening. There's usually expansion and command completion. I'll discuss them together here. For details, check this User's Guide.
Command completion is enabled by default with Oh My Zsh. To enable it, add the following lines to your .zshrc file:


autoload -U compinit

compinit


Zsh's completion system is smart. It tries to suggest only items that can be used in certain contexts—for example, if you type cd and TAB, zsh suggests only directory names as it knows cd does not work with anything else.
Conversely, it suggests usernames when running user-related commands or hostnames when using ssh or ping, for example.
It has a vast completion library and understands many different commands. For example, if you're using the tar command, you can press Tab to see a list of files available in the package as candidates for extraction:


$ tar -xzvf test1.tar.gz test1/file1 (TAB)

file1 file2


Here's a more advanced example, using git. In this example, when typing TAB, zsh automatically completes the name of the only file in the repository that can be staged:


$ ls

original  plan.txt  zsh-5tips.md  zsh_theme_small.png

$ git status

On branch master

Your branch is up to date with 'origin/master'.



Changes not staged for commit:

  (use "git add <file>..." to update what will be committed)

  (use "git checkout -- <file>..." to discard changes in working directory)



        modified:   zsh-5tips.md



no changes added to commit (use "git add" and/or "git commit -a")

$ git add (TAB)

$ git add zsh-5tips.md


It also understands command line options and suggests only the ones that are relevant to the subcommand selected:


$ git commit - (TAB)

--all                  -a       -- stage all modified and deleted paths

--allow-empty                   -- allow recording an empty commit

--allow-empty-message           -- allow recording a commit with an empty message

--amend                         -- amend the tip of the current branch

--author                        -- override the author name used in the commit

--branch                        -- show branch information

--cleanup                       -- specify how the commit message should be cleaned up

--date                          -- override the author date used in the commit

--dry-run                       -- only show the list of paths that are to be committed or not, and any untracked

--edit                 -e       -- edit the commit message before committing

--file                 -F       -- read commit message from given file

--gpg-sign             -S       -- GPG-sign the commit

--include              -i       -- update the given files and commit the whole index

--interactive                   -- interactively update paths in the index file

--message              -m       -- use the given message as the commit message

... TRUNCATED ...


After typing TAB, you can use the arrow keys to navigate the options list and select the one you need. Now you don't need to memorize all those Git options.
There are many options available. The best way to find what is most helpful to you is by using it.

5. Command line editing and history

Zsh's command line editing capabilities are also useful. By default, it emulates emacs. If, like me, you prefer vi/vim, enable vi bindings with the following command:
$ bindkey -v
If you're using Oh My Zsh, the vi-mode plugin enables additional bindings and a mode indicator on your prompt—very useful.
After enabling vi bindings, you can edit the command line using vi commands. For example, press ESC+/ to search the command line history. While searching, pressing n brings the next matching line, and N the previous one. Most common vi commands work after pressing ESC such as 0 to jump to the start of the line, $ to jump to the end, i to insert, a to append, etc. Even commands followed by motion work, such as cw to change a word.
In addition to command line editing, zsh provides several useful command line history features if you want to fix or re-execute previous used commands. For example, if you made a mistake, typing fc brings the last command in your favorite editor to fix it. It respects the $EDITOR variable and by default uses vi.
Another useful command is r, which re-executes the last command; and r <WORD>, which re-executes the last command that contains the string WORD.
Finally, typing double bangs (!!) brings back the last command anywhere in the line. This is useful, for instance, if you forgot to type sudo to execute commands that require elevated privileges:


$ less /var/log/dnf.log

/var/log/dnf.log: Permission denied

$ sudo !!

$ sudo less /var/log/dnf.log


These features make it easier to find and re-use previously typed commands.

Where to go from here?

These are just a few of the zsh features that can make you more productive; there are many more. For additional information, consult the following resources:
An Introduction to the Z Shell
A User's Guide to ZSH
Archlinux Wiki
zsh-lovers
Do you have any zsh productivity tips to share? I would love to hear about them in the comments below.

          8 great Python libraries for side projects      Cache   Translate Page      
https://opensource.com/article/18/9/python-libraries-side-projects

These Python libraries make it easy to scratch that personal project itch.

We have a saying in the Python/Django world: We came for the language and stayed for the community. That is true for most of us, but something else that has kept us in the Python world is how easy it is to have an idea and quickly work through it over lunch or in a few hours at night.
This month we're diving into Python libraries we love to use to quickly scratch those side-project or lunchtime itches.

To save data in a database on the fly: Dataset

Dataset is our go-to library when we quickly want to collect data and save it into a database before we know what our final database tables will look like. Dataset has a simple, yet powerful API that makes it easy to put data in and sort it out later.
Dataset is built on top of SQLAlchemy, so extending it will feel familiar. The underlying database models are a breeze to import into Django using Django's built-in inspectdb management command. This makes working with existing databases pretty painless.
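As a quick, hedged sketch of what that looks like in practice (the SQLite file, table name, and columns below are arbitrary examples, not anything the library prescribes):

import dataset

# Connect to (and create, if needed) a local SQLite database.
db = dataset.connect('sqlite:///scratch.db')

# Tables and columns are created on the fly from the dicts you insert.
table = db['movies']
table.insert(dict(title='Crazy Rich Asians', score=93))
table.insert(dict(title='The Meg', score=46, gross_millions=12.9))

# Query the data back once you know what your schema turned out to be.
print(table.find_one(title='The Meg'))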

To scrape data from web pages: Beautiful Soup

Beautiful Soup (BS4 as of this writing) makes extracting information out of HTML pages easy. It's our go-to anytime we need to turn unstructured or loosely structured HTML into structured data. It's also great for working with XML data that might otherwise not be readable.
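A tiny sketch of the idea (the HTML string here is just a stand-in for a page you have already fetched):

from bs4 import BeautifulSoup

html = '<ul class="fruits"><li>Apple</li><li>Banana</li><li>Cherry</li></ul>'
soup = BeautifulSoup(html, 'html.parser')

# Turn loosely structured markup into a plain Python list.
fruits = [li.get_text() for li in soup.find_all('li')]
print(fruits)  # ['Apple', 'Banana', 'Cherry']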

To work with HTTP content: Requests

Requests is arguably one of the gold standard libraries for working with HTTP content. Anytime we need to consume an HTML page or even an API, Requests has us covered. It's also very well documented.
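A short sketch of the kind of call we reach for most often (the GitHub URL is only an example endpoint):

import requests

# Fetch a JSON API and inspect the response.
response = requests.get('https://api.github.com/repos/requests/requests')
response.raise_for_status()   # raise an exception if the request failed
data = response.json()        # parse the JSON body into a dict
print(data.get('description'))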

To write command-line utilities: Click

When we need to write a native Python script, Click is our favorite library for writing command-line utilities. The API is straightforward, well thought out, and there are only a few patterns to remember. The docs are great, which makes looking up advanced features easy.
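As an illustrative sketch (not lifted from the Click docs), a small greeting command might look like this:

import click

@click.command()
@click.option('--count', default=1, help='Number of greetings.')
@click.argument('name')
def hello(count, name):
    """Greet NAME the given number of times."""
    for _ in range(count):
        click.echo('Hello, {0}!'.format(name))

if __name__ == '__main__':
    hello()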

To name things: Python Slugify

As we all know, naming things is hard. Python Slugify is a useful library for turning a title or description into a unique(ish) identifier. If you are working on a web project and you want to use SEO-friendly URLs, Python Slugify makes this easier.
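For example (a minimal sketch; the input string is made up):

from slugify import slugify

title = '10 Tips for Making GREAT Coffee -- Quickly!'
print(slugify(title))
# -> '10-tips-for-making-great-coffee-quickly'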

To work with plugins: Pluggy

Pluggy is relatively new, but it's also one of the best and easiest ways to add a plugin system to your existing application. If you have ever worked with pytest, you have used pluggy without knowing it.
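Here is a compact sketch of the basic pattern, loosely following the pluggy documentation; the project name 'myproject' and the hook name are arbitrary:

import pluggy

hookspec = pluggy.HookspecMarker('myproject')
hookimpl = pluggy.HookimplMarker('myproject')

class MySpec:
    """Declares the hooks a plugin may implement."""
    @hookspec
    def myhook(self, arg):
        """Take arg and return a value."""

class Plugin1:
    """One implementation of the hook."""
    @hookimpl
    def myhook(self, arg):
        return arg + 1

pm = pluggy.PluginManager('myproject')
pm.add_hookspecs(MySpec)
pm.register(Plugin1())

# Calling the hook returns a list with one result per registered plugin.
print(pm.hook.myhook(arg=41))  # [42]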

To convert CSV files into APIs: Datasette

Datasette, not to be confused with Dataset, is an amazing tool for easily turning CSV files into full-featured read-only REST JSON APIs. Datasette has tons of features, including charting and geo (for creating interactive maps), and it's easy to deploy via a container or third-party web host.

To handle environment variables and more: Envparse

If you need to parse environment variables because you don't want to save API keys, database credentials, or other sensitive information in your source code, then envparse is one of your best bets. Envparse handles environment variables, ENV files, variable types, and even pre- and post-processors (in case you want to ensure that a variable is always upper or lower case, for instance).
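A small sketch of typical usage (the variable names and the optional .env file are assumptions made for the example):

from envparse import env

# Optionally read values from a local .env file in the working directory.
env.read_envfile()

DEBUG = env.bool('DEBUG', default=False)
DB_PORT = env.int('DB_PORT', default=5432)
SECRET_KEY = env('SECRET_KEY', default='dev-only-secret')

print(DEBUG, DB_PORT, SECRET_KEY)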

Do you have a favorite Python library for side projects that's not on this list? Please share it in the comments.

          Doubts about my choice of programming direction
Replies: 0 Last poster: w3bcrowf3r at 13-09-2018 05:15 Topic is Open. I'm currently learning full-stack JavaScript so I can freelance alongside school; I have a PHP/Python background and have mostly written only backend code. But once I get my diploma I'd like to work at a company and stop freelancing. The thing is, I don't see very many full-stack JavaScript vacancies on indeed.nl? Or am I searching wrong? I can only find 43 at the moment. Is full-stack JavaScript not that popular in the Netherlands? And why not, if I may ask? Or is that because Node.JS isn't that old yet? I can always switch to a PHP/Python backend for job security; there are hundreds of vacancies for that, I see. But I'd prefer full-stack JavaScript, because then I can work on my own idea.
           Comment on WiFi Pineapple NANO: Persistent Recon DB by Cătălin George Feștilă       Cache   Translate Page      
About the sqlitebrowser tool: I see some strange display issues in the interface, but it is working well, finally. Try SELinux and the SELinux Python packages; this will give you good security for Linux and network software, even for SD cards or databases.
          Splunk Developer - JM Group - Montréal, QC      Cache   Translate Page      
Splunk with Perl or Python or shell...
From JM GROUP - Wed, 22 Aug 2018 03:30:55 GMT - View all Montréal, QC jobs
          Mastering Unsupervised Learning with Python [Video]      Cache   Translate Page      

Mastering Unsupervised Learning with Python [Video] English | MP4 | AVC 1920×1080 | AAC 48KHz 2ch | 3h 52m | 834 MB eLearning | Skill level: All Levels

The post Mastering Unsupervised Learning with Python [Video] appeared first on WOW! eBook: Free eBooks & Video Tutorials Download.


          Secret Recipes of the Python Ninja      Cache   Translate Page      

eBook Details: Paperback: 380 pages Publisher: WOW! eBook (May 21, 2018) Language: English ISBN-10: 1788294874 ISBN-13: 978-1788294874 eBook Description: Secret Recipes of the Python Ninja: Test your Python programming skills by solving real-world problems

The post Secret Recipes of the Python Ninja appeared first on WOW! eBook: Free eBooks & Video Tutorials Download.


          Jr-Mid Level Software Developer (Wallingford, CT)      Cache   Translate Page      
CT-Wallingford, Our client is seeking a Jr-Mid Level Software Developer for a contract opportunity in Wallingford, CT! Requirements Bachelor’s Degree in Computer Science, or relevant work experience 1-5 years of professional software development experience 1-5 years developing applications for web services or applications Proficiency in (one or more): Java, C+, Objective-C, Ruby, Python, HTML 5, CSS and JavaScrip
          Junior Full-Stack Developer      Cache   Translate Page      
CT-Wallingford, Location: Wallingford, CT Duration: 3 months Description: Basic Qualifications - Bachelor’s Degree in Computer Science, or relevant work experience - 1-5 years of professional software development experience - 1-5 years developing applications for web services or applications - Proficiency in (one or more): Java, C+, Objective-C, Ruby, Python, HTML 5, CSS and JavaScript Preferred Qualifications -
          Solving the net.jpountz.lz4.LZ4BlockInputStream error when using Python with Spark Streaming
none
          Python faker: generating fake data for populating and testing
none
          Java Trainers, Python Trainers - S.A.S Institute - Ghaziabad, Uttar Pradesh      Cache   Translate Page      
TRAINERS ( JAVA, PYTHON, PHP, .NET, ANDROID, IOS, TESTING, ENGLISH, IELTS, GERMAN, FRENCH, DIGITAL MAKT ) - Req. Java Trainers, Python Trainers....
From Indeed - Sun, 02 Sep 2018 09:38:55 GMT - View all Ghaziabad, Uttar Pradesh jobs
          "Hey Macarena" helps improve CPR success rates - before the study was even out, a Spanish TV station had already made a cult-movie-grade promo about it?

Andalusia – We previously reported that a University of Barcelona study found that the nonsensical 1990s hit "Hey Macarena" can actually improve the success rate of CPR. A Spanish journalist recently dug up a CPR education video from a local TV station in which the protagonist really does put on "Hey Macarena" before performing CPR on the "patient". The journalist added that even the famous absurdist director Almodóvar couldn't top this, because it is not hard to notice that the "patient" in the clip collapses very theatrically, while his wife is so flustered that she prays to the Virgin Mary one moment and starts dancing the moment she hears "Hey Macarena" - frankly outshining the lead. "Either Woody Allen, Pedro Almodóvar and Monty Python are working side by side at Écija's local TV station, or there is a genius out there who is not getting all the..."

The post "Hey Macarena" helps improve CPR success rates - before the study was even out, a Spanish TV station had already made a cult-movie-grade promo about it? appeared first on 寰雨膠事錄 Gaus.ee.


          Former Victoria's Secret model Lyndsey Scott is seriously into programming (7 photos)
An Instagram account that publishes IT-themed jokes and humour posted a photo of the well-known American model Lyndsey Scott. The photo was captioned: "This Victoria's Secret model can code in Python, C++, Java, MIPS and Objective-C". Naturally, the reaction from users did not take long, since few believed it and everyone started joking about it.

http://feedproxy.google.com/~r/trinixy/BQMg/~3/yqND3M4C9rE/164485-byvshaya-model-victorias-sesret-lindsi-skott-serezno-uvlekaetsya-programmirovaniem-7-foto.html


          Technical Lead, Python/PHP - Kinessor - Montréal, QC
We are looking for a Technical Lead, Python/PHP. Our ideal candidate has 7-10 years of experience, ideally with a telecom/systems background...
From Indeed - Fri, 17 Aug 2018 16:58:24 GMT - View all Montréal, QC jobs
          Senior Software Engineer - Python - Tucows - Toronto, ON      Cache   Translate Page      
Flask, Tornado, Django. Tucows provides domain names, Internet services such as email hosting and other value-added services to customers around the world....
From Tucows - Sat, 11 Aug 2018 05:36:13 GMT - View all Toronto, ON jobs
          Senior Python Developer - Chisel - Toronto, ON      Cache   Translate Page      
Chisel.ai is a fast-growing, dynamic startup transforming the insurance industry using Artificial Intelligence. Our novel algorithms employ techniques from...
From Chisel - Mon, 23 Jul 2018 19:50:37 GMT - View all Toronto, ON jobs
          Senior Software Developer - Encircle - Kitchener, ON      Cache   Translate Page      
Server Development - Tornado (Python), SQLAlchemy, and Postgresql. We’re Encircle, nice to meet you!...
From Encircle - Thu, 05 Jul 2018 15:05:51 GMT - View all Kitchener, ON jobs
          Software Developer - Encircle - Kitchener, ON      Cache   Translate Page      
Server Development - Tornado (Python), SQLAlchemy, and Postgresql. We’re Encircle, nice to meet you!...
From Encircle - Thu, 05 Jul 2018 15:05:49 GMT - View all Kitchener, ON jobs
          [Tutorial] Download Udemy 2018 Fullstack: Laravel 5.6 With QRCodes, APIs, Android/IOS - learn Laravel 5.6 with QR codes, APIs, and Android/iOS

Download Udemy 2018 Fullstack: Laravel 5.6 With QRCodes, APIs, Android/IOS - learn Laravel 5.6 with QR codes, APIs, and Android/iOS

Laravel is one of the PHP frameworks intended for developing web applications, and it works on the MVC pattern. The Laravel framework makes programming web applications in PHP simpler and is a great help in carrying out PHP projects and developing them with ease. Laravel is built on top of various components of the Symfony framework, giving your application a large foundation of reliable, tested code. Laravel is a collection of the best solutions, with an expressive syntax ...


http://p30download.com/82110

Download link: http://p30download.com/fa/entry/82110


          [Tutorial] Download Udemy Dart and Flutter The Complete Developer's Guide - a complete course on Dart and Flutter development

Download Udemy Dart and Flutter The Complete Developer's Guide - a complete course on Dart and Flutter development

Dart is a programming language developed by Google. Dart's goal is to replace JavaScript, the built-in language of web browsers. Dart offers a solution to problems found in JavaScript (for example, memory issues), providing better performance, simpler use for large projects, and more security. Google is also working hard to make Dart more sophisticated and to give it many features and capabilities. Dart is a class-based, single-inheritance, object-oriented language with a C-like grammar that has interfaces, reified generics, abstract classes, and optional typing. Type annotations ...


http://p30download.com/82156

Download link: http://p30download.com/fa/entry/82156


          Software Development Senior Engineer - Seagate Technology - Singapore      Cache   Translate Page      
1. Design and develop test software in C/C++/Python for hard-disk firmware verification under Linux and Windows. 2. Maintain and enhance existing software...
From Seagate Technology - Tue, 11 Sep 2018 07:48:16 GMT - View all Singapore jobs
          Why Containers are the Future      Cache   Translate Page      

Software deployment has been a major problem for decades. On the client and the server.

On the client, the inability to deploy apps to devices without breaking other apps (or sometimes the client operating system (OS)) has pushed most business software development to relying entirely on the client's browser as a runtime. Or in some cases you may leverage the deployment models of per-platform "stores" from Apple, Google, or Microsoft.

On the server, all sorts of solutions have been attempted, including complex and costly server-side management/deployment software. Over the past many years the industry has mostly gravitated toward the use of virtual machines (VMs) to ease some of the pain, but the costly server-side management software remains critical.

At some point containers may revolutionize client deployment, but right now they are in the process of revolutionizing server deployment, and that's where I'll focus in the remainder of this post.

Fairly recently the concept of containers, most widely recognized with Docker, has gained rapid acceptance.

tl;dr

Containers offer numerous benefits over older IT models such as virtual machines. Containers integrate smoothly into DevOps; streamlining and stabilizing the move from source code to deployable assets. Containers also standardize the deployment and runtime model for applications and services in production (and test/staging). Containers are an enabling technology for microservice architecture and DevOps.


Virtual Machines to Containers

Containers are somewhat like virtual machines, except they are much lighter weight and thus offer major benefits. A VM virtualizes the hardware, allowing installation of the OS on "fake" hardware, and your software is installed and run on that OS. A container virtualizes the OS, allowing you to install and run your software on this "fake" OS.

In other words, containers virtualize at a higher level than VMs. This means that where a VM takes many seconds to literally boot up the OS, a container doesn't boot up at all, the OS is already there. It just loads and starts our application code. This takes fractions of a second.

Where a VM has a virtual hard drive that contains the entire OS, plus your application code, plus everything else the OS might possibly need, a container has an image file that contains your application code and any dependencies required by that app. As a result, the image files for a container are much smaller than a VM hard drive.

Container image files are stored in a repository so they can be easily managed and then downloaded to physical servers for execution. This is possible because they are so much smaller than a virtual hard drive, and the result is a much more flexible and powerful deployment model.

Containers vs PaaS/FaaS

Platform as a Service and Functions as a Service have become very popular ways to build and deploy software, especially in public clouds such as Microsoft Azure. Sometimes FaaS is also referred to as "serverless" computing, because your code only uses resources while running, and otherwise doesn't consume server resources; hence being "serverless".

The thing to keep in mind is that PaaS and FaaS are both really examples of container-based computing. Your cloud vendor creates a container that includes an OS and various other platform-level dependencies such as the .NET Framework, nodejs, Python, the JDK, etc. You install your code into that pre-built environment and it runs. This is true whether you are using PaaS to host a web site, or FaaS to host a function written in C#, JavaScript, or Java.

I always think of this as a spectrum. On one end are virtual machines, on the other is PaaS/FaaS, and in the middle are Docker containers.


VMs give you total control at the cost of you needing to manage everything. You are forced to manage machines at all levels, from OS updates and patches, to installation and management of platform dependencies like .NET and the JDK. Worse, there's no guarantee of consistency between instances of your VMs because each one is managed separately.

PaaS/FaaS give you essentially zero control. The vendor manages everything - you are forced to live within their runtime (container) model, upgrade when they say upgrade, and only use versions of the platform they currently support. You can't get ahead or fall behind the vendor.

Containers such as Docker give you some abstraction and some control. You get to pick a consistent base image and add in the dependencies your code requires. So there's consistency and maintainability that's far superior to a VM, but not as restrictive as PaaS/FaaS.

Another key aspect to keep in mind, is that PaaS/FaaS models are vendor specific. Containers are universally supported by all major cloud vendors, meaning that the code you host in your containers is entirely separated from anything specific to a given cloud vendor.

Containers and DevOps

DevOps has become the dominant way organizations think about the development, security, QA, deployment, and runtime monitoring of apps. When it comes to deployment, containers allow the image file to be the output of the build process.

With a VM model, the build process produces assets that must be then deployed into a VM. But with containers, the build process produces the actual image that will be loaded at runtime. No need to deploy the app or its dependencies, because they are already in the image itself.

This allows the DevOps pipeline to directly output a file, and that file is the unit of deployment!

No longer are IT professionals needed to deploy apps and dependencies onto the OS. Or even to configure the OS, because the app, dependencies, and configuration are all part of the DevOps process. In fact, all those definitions are source code, and so are subject to change tracking where you can see the history of all changes.

Servers and Orchestration

I'm not saying IT professionals aren't needed anymore. At the end of the day containers do run on actual servers, and those servers have their own OS plus the software to manage container execution. There are also some complexities around networking at the host OS and container levels. And there's the need to support load distribution, geographic distribution, failover, fault tolerance, and all the other things IT pros need to provide in any data center scenario.

With containers the industry is settling on a technology called Kubernetes (K8S) as the primary way to host and manage containers on servers.

Installing and configuring K8S is not trivial. You may choose to do your own K8S deployment in your data center, but increasingly organizations are choosing to rely on managed K8S services. Google, Microsoft, and Amazon all have managed Kubernetes offerings in their public clouds. If you can't use a public cloud, then you might consider using on-premises clouds such as Azure Stack or OpenStack, where you can also gain access to K8S without the need for manual installation and configuration.

Regardless of whether you use a managed public or private K8S cloud solution, or set up your own, the result of having K8S is that you have the tools to manage running container instances across multiple physical servers, and possibly geographic data centers.

Managed public and private clouds provide not only K8S, but also the hardware and managed host operating systems, meaning that your IT professionals can focus purely on managing network traffic, security, and other critical aspects. If you host your own K8S then your IT pro staff also own the management of hardware and the host OS on each server.

In any case, containers and K8S radically reduce the workload for IT pros in terms of managing the myriad VMs needed to host modern microservice-based apps, because those VMs are replaced by container images, managed via source code and the DevOps process.

Containers and Microservices

Microservice architecture is primarily about creating and running individual services that work together to provide rich functionality as an overall system.

A primary attribute (in my view the primary attribute) of services is that they are loosely coupled, sharing no dependencies between services. Each service should also be deployed separately, allowing for independent versioning of each service without needing to deploy any other services in the system.

Because containers are a self-contained unit of deployment, they are a great match for a service-based architecture. If we consider that each service is a stand-alone, atomic application that must be independently deployed, then it is easy to see how each service belongs in its own container image.


This approach means that each service, along with its dependencies, becomes a deployable unit that can be orchestrated via K8S.

Services that change rapidly can be deployed frequently. Services that change rarely can be deployed only when necessary. So you can easily envision services that deploy hourly, daily, or weekly, while other services will deploy once and remain stable and unchanged for months or years.
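To make that concrete, here is a rough sketch (not from the original article) of a self-contained service written in Python using only the standard library; the port and endpoint are arbitrary choices. A process like this, together with its dependencies, is exactly the kind of unit you would bake into a single container image and hand to K8S:

    # Minimal sketch of a self-contained service (assumed port 8080, /health endpoint).
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class HealthHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Respond to /health with a small JSON payload; anything else is a 404.
            if self.path == "/health":
                body = json.dumps({"status": "ok"}).encode("utf-8")
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_error(404)

    if __name__ == "__main__":
        # The service owns its own process and listens on one port,
        # which maps naturally onto "one service per container image".
        HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()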

Conclusion

Clearly I am very positive about the potential of containers to benefit software development and deployment. I think this technology strikes a nice compromise between virtual machines and PaaS, while providing a vendor-neutral model for hosting apps and services.


          Python A-Z™: Python For Data Science With Real Exercises      Cache   Translate Page      
Python A-Z™: Python For Data Science With Real Exercises
Python A-Z™: Python For Data Science With Real Exercises
MP4 | Video: AVC 1280x720 | Audio: AAC 44KHz 2ch | Duration: 11 Hours | Lec: 69 | 2.18 GB
Genre: eLearning | Language: English

Programming In Python For Data Analytics And Data Science. Learn Statistical Analysis, Data Mining And Visualization

There are lots of Python courses and lectures out there. However, Python has a very steep learning curve and students often get overwhelmed. This course is different!


          Building Amazing GUI Applications in Python 3 from Scratch      Cache   Translate Page      

Building Amazing GUI Applications in Python 3 from Scratch
Building amazing GUI applications in Python 3 from scratch
MP4 | Video: h264, 1280x720 | Audio: AAC, 44.1 KHz, 2 Ch
Genre: eLearning | Language: English | Duration: 5.5 hour | Size: 2.38 GB


          Python Application and Platform Developer - Plotly - Montréal, QC      Cache   Translate Page      
Engage our Open Source communities and help take stewardship of our OSS projects. Plotly is hiring Pythonistas!...
From Plotly - Thu, 06 Sep 2018 15:03:02 GMT - View all Montréal, QC jobs
          Install Jupyter Notebook and TensorFlow On Ubuntu 18.04 Server      Cache   Translate Page      

Here is How To Install Jupyter Notebook and TensorFlow On Ubuntu 18.04 Server. It is probably easy to install Anaconda for Python packages.

The post Install Jupyter Notebook and TensorFlow On Ubuntu 18.04 Server appeared first on The Customize Windows.


          Senior Lead Software Engineer      Cache   Translate Page      
WI-Madison, Senior / Lead Software Engineer (Java, Python, and .Net) Our client in the telecommunications industry has an immediate opportunity for a full time Senior / Lead Software Engineer at their headquarters in Madison, WI. This is a full time, direct placement opportunity! They are a Fortune 1000 company and seeking a seasoned Senior or Lead level Software Engineer that is versed with object-oriented p
          I need a coder who's familiar with fastai python library      Cache   Translate Page      
I am working on an old Kaggle competition (multilabel image classification) as my capstone project and have encountered some problems that I can't figure out by myself. Most of the structure is already done; I have already done the training... (Budget: $30 - $250 USD, Jobs: Machine Learning, Python)
          Programmer/Analyst (Research Data Management) - University of Saskatchewan - Saskatoon, SK      Cache   Translate Page      
Java, JavaScript, Python, PHP, HTML, YAML, CSS, Git, Angular, Ansible, Grunt, Jenkins, JIRA, Confluence, Docker, Django.... $62,850 - $98,205 a year
From University of Saskatchewan - Mon, 30 Jul 2018 18:22:24 GMT - View all Saskatoon, SK jobs
          PSTN - Senior VoIP Software Developer - LogMeIn - Rimouski, QC      Cache   Translate Page      
Both open-source and homegrown elements are used in a variety of programming languages (mostly C, Python, Java, PHP)....
From LogMeIn - Fri, 10 Aug 2018 22:15:50 GMT - View all Rimouski, QC jobs
          TradebooX - Software Developer - Multidisciplinary - Python/Javascript (1-7 yrs) Delh      Cache   Translate Page      
TradebooX - Delhi, DL Seeking seasoned Software Developers and *Technical* *Writers* with strong multidisciplinary experience, especially Python,...
          Splunk Developer - JM Group - Montréal, QC      Cache   Translate Page      
Splunk with Perl or Python or shell...
From JM GROUP - Wed, 22 Aug 2018 03:30:55 GMT - View all Montréal, QC jobs
          8 great Python libraries for side projects      Cache   Translate Page      

These libraries make it easier to build personal projects.

In the Python/Django world there is a saying: we came for the language and stayed for the community. That is true for most of us, but something else keeps us in the Python world and unwilling to leave: it is so easy to take an idea and turn it into something real over a lunch break or a few hours in the evening.

This month, let's look at some of the Python libraries we love to use to knock out side projects quickly or to fill a lunch hour.

Save data to a database on the fly: Dataset

When we want to collect data quickly and save it to a database without knowing in advance what the final tables will look like, the Dataset library is our best choice. Dataset has a simple but powerful API, so we can easily store data now and sort it out later.

Dataset is built on top of SQLAlchemy, so extending it will feel familiar. The underlying database models are easy to import into Django using Django's built-in inspectdb management command, which makes working with existing databases painless.
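A minimal sketch of that capture-now, organize-later workflow (the file name, table name, and fields below are made up for illustration):

    # Quick data capture with dataset; the schema is created on the fly.
    import dataset

    db = dataset.connect("sqlite:///side_project.db")   # file is created if missing
    table = db["books"]                                  # table is created on first insert

    table.insert({"title": "Begin to Code with Python", "year": 2017})
    table.insert({"title": "Secret Recipes of the Python Ninja", "year": 2018, "pages": 380})

    print(table.find_one(year=2018))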

Scrape data from web pages: Beautiful Soup

The Beautiful Soup library (usually written BS4) makes extracting information from HTML pages very simple. Whenever we need to turn unstructured or loosely structured HTML into structured data, Beautiful Soup is the tool to reach for. It is also a good choice for processing XML that would otherwise be hard to read.
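For example, a small, self-contained sketch of pulling every link out of an HTML snippet (the markup here is invented for illustration):

    # Extract the href and text of every link in an HTML fragment.
    from bs4 import BeautifulSoup

    html = """
    <ul>
      <li><a href="https://opensource.com">Opensource.com</a></li>
      <li><a href="https://www.python.org">Python</a></li>
    </ul>
    """

    soup = BeautifulSoup(html, "html.parser")
    for link in soup.find_all("a"):
        print(link["href"], "->", link.get_text())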

Work with HTTP content: Requests

When you need to work with HTTP content, Requests is without question the best library for the job. Whether we want to scrape HTML pages or talk to APIs, there is no getting around Requests. It is also very well documented.
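A minimal sketch of the kind of API call this covers (the URL is only an example endpoint):

    # Fetch a JSON API and inspect the response.
    import requests

    response = requests.get("https://api.github.com/repos/python/cpython", timeout=10)
    response.raise_for_status()      # raise an exception if the request failed
    data = response.json()           # parse the JSON body into a dict
    print(data["full_name"], data["stargazers_count"])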

Write command-line tools: Click

Click is my favorite library for writing a simple Python script as a command-line tool. Its API is intuitive and thoughtfully designed, and there are only a few patterns to remember. Its documentation is excellent too, which makes learning the advanced features much easier.
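A small sketch of a Click-based tool (the script name and options are made up; run it as, say, "python hello.py --count 2 --name Ada"):

    # A tiny command-line tool built with Click.
    import click

    @click.command()
    @click.option("--count", default=1, help="Number of greetings.")
    @click.option("--name", default="world", help="Who to greet.")
    def hello(count, name):
        """Print a greeting COUNT times."""
        for _ in range(count):
            click.echo(f"Hello, {name}!")

    if __name__ == "__main__":
        hello()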

Name things: Python Slugify

As everyone knows, naming things is hard. Python Slugify is a very useful library that turns a title or description into a clean, unique identifier. If you are working on a web project and want SEO-friendly URLs, Python Slugify makes this easy.
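For instance, with the python-slugify package (the title string is just an example):

    # Turn a human-readable title into a URL-friendly slug.
    from slugify import slugify

    print(slugify("8 great Python libraries for side projects!"))
    # -> 8-great-python-libraries-for-side-projects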

Work with plugins: Pluggy

Pluggy is relatively new, but it is the best and easiest way to add a plugin system to an existing application. If you have used pytest, you have effectively already used Pluggy, even if you didn't know it.
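A rough sketch of a Pluggy-style plugin system (the project name, hook name, and plugin are all invented for illustration):

    # A hook specification, one plugin that implements it, and a call through the plugin manager.
    import pluggy

    hookspec = pluggy.HookspecMarker("myproject")
    hookimpl = pluggy.HookimplMarker("myproject")

    class MySpec:
        @hookspec
        def myhook(self, arg):
            """Hook that plugins may implement."""

    class AddOne:
        @hookimpl
        def myhook(self, arg):
            return arg + 1

    pm = pluggy.PluginManager("myproject")
    pm.add_hookspecs(MySpec)
    pm.register(AddOne())
    print(pm.hook.myhook(arg=41))  # one result per registered plugin, e.g. [42]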

Turn CSV files into APIs: Datasette

Datasette is a magical tool that makes it easy to turn CSV files into a full-featured, read-only REST JSON API; do not confuse it with the Dataset library above. Datasette has many features, including charting and geo support (for building interactive maps), and it is easy to deploy via containers or a third-party web host.

Handle environment variables and more: Envparse

If you do not want to keep API keys, database credentials, or other sensitive information in your source code, then you need to parse environment variables, and envparse is the best choice for that. Envparse handles environment variables, ENV files, variable types, and even pre- and post-processing (for example, ensuring that a variable is always upper- or lowercase).
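A small sketch of typed settings read from the environment (the variable names and defaults are made up):

    # Read typed configuration from environment variables (and an optional .env file).
    from envparse import env

    env.read_envfile()  # loads a local ".env" file if one exists

    DEBUG = env.bool("DEBUG", default=False)
    PORT = env.int("PORT", default=8000)
    DATABASE_URL = env("DATABASE_URL", default="sqlite:///side_project.db")

    print(DEBUG, PORT, DATABASE_URL)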

Is your favorite Python library for side projects missing from this list? Please share it with us in the comments.


via: https://opensource.com/article/18/9/python-libraries-side-projects

Author: Jeff Triplett; topic selected by: lujun9972; translated by: ucasFL; proofread by: wxy

This article was originally translated and compiled by LCTT and is proudly presented by Linux中国 (Linux.cn).


          C/Python Software Developer – Roodepoort – R600k      Cache   Translate Page      
e-Merge IT Recruitment - Roodepoort, Johannesburg - This ROODEPOORT based client is looking for a C/Python Software Developer to produce or modify software through requirements elicitation, analysis, design, integration, testing and deployment. Job Description: Understand and follow the project-specific or maintenance process. Take pa...
          Advanced Network Reconnaissance Toolkit: badKarma      Cache   Translate Page      
   badKarma is a Python 3 GTK+ toolkit that aims to assist penetration testers during all phases of a network infrastructure penetration test. It allows testers to save time by having...

          read online Invent Your Own Computer Games With Python, 4e full       Cache   Translate Page      

Download read online Invent Your Own Computer Games With Python, 4e full Ebook Free Download Here https://booksmarketingsale.blogspot.com/?book=1593277954 none
          Dynamo: Sorting a List of Lists by a Value in the Sub-List in Python      Cache   Translate Page      
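The body of this post did not come through in the feed, but the task in the title is a one-liner in plain Python (the sample data and the choice of the second element as the sort key are invented for illustration):

    # Sort a list of lists by a value inside each sub-list (here: the second element).
    rows = [["beam", 3.2], ["column", 1.5], ["slab", 2.8]]

    rows_sorted = sorted(rows, key=lambda row: row[1])
    print(rows_sorted)  # [['column', 1.5], ['slab', 2.8], ['beam', 3.2]]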
          GRPC 1.15.0 released, Google's high-performance RPC framework      Cache   Translate Page      

gRPC 1.15.0 has been released. gRPC is a high-performance, open source, general-purpose RPC framework designed for mobile and HTTP/2, and it was the first RPC framework released by Google based on Protocol Buffers. Built on the HTTP/2 standard, gRPC brings features such as bidirectional streaming, flow control, header compression, and multiplexing of many requests over a single TCP connection. These features make it perform better on mobile devices, with lower power consumption and a smaller footprint.

This release contains optimizations, improvements, and bug fixes. Highlights include:

Core

  • Document SSL portability and performance considerations. Details

  • Simplify call arena size growth. (#16396)

  • Make gRPC buildable with AIX and Solaris (no official support). (#15926)

  • PF: Check connectivity state before watching. (#16306)

  • Added system roots feature to load roots from OS trust store. (#16083)

  • Fix c-ares compilation under windows (but doesn't yet enable windows DNS queries), and then enables address sorting on Windows. (#16163)

  • Fix re-resolution in pick first. (#16076)

  • Allow error strings in final_info to propagate to filters on call destruction. (#16104)

  • Add resolver executor . (#16010)

  • Data race fix for lockfree_event. (#16053)

  • Channelz: Expose new Core API. (#16022)

C++

  • cmake: disable assembly optimizations only when necessary. (#16415)

  • C++ sync server: Return status RESOURCE_EXHAUSTED if no thread quota available. (#16356)

  • Use correct target name for gflags-config.cmake. (#16343)

  • Make should generate pkg-config file for gpr as well. (#15295)

  • Restrict the number of threads in C++ sync server. (#16217)

  • Allow reset of connection backoff. (#16225)

C#

  • Add experimental support for Xamarin.Android and Xamarin.iOS, added Helloworld example for Xamarin. Details

  • Add experimental support for Unity Android and iOS. Details

  • Add server reflection tutorial. Details

  • Avoid deadlock while cancelling a call. (#16440)

  • Subchannel sharing for secure channels now works as expected. (#16438)

  • Allow dot in metadata keys. (#16444)

  • Avoid shutdown crash on iOS. (#16308)

  • Add script for creating a C# package for Unity. (#16208)

  • Add Xamarin example. (#16194)

  • Cleanup and update C# examples. (#16144)

  • Grpc.Core: add support for x86 android emulator. (#16121)

  • Xamarin iOS: Add libgrpc_csharp_ext.a for iOS into Grpc.Core nuget. (#16109)

  • Xamarin support improvements . (#16099)

  • Mark native callbacks with MonoPInvokeCallback. (#16094)

  • Xamarin.Android: add support. (#15969)

Objective-C

  • Make BoringSSL symbols private to gRPC in Obj-C so there is no conflict when linking with OpenSSL. (#16358)

  • Use environment variable to enable CFStream. (#16261)

  • Surface error_string to ObjC users. (#16271)

  • Fix GRPCCall refcounting issue. (#16213)

Python

  • Added support for client-side fork on Linux and Mac by setting the environment variable GRPC_ENABLE_FORK_SUPPORT=1. Applications may fork with active RPCs, as long as no user threads are currently invoking gRPC library methods. In-progress RPCs continue in the parent process, and the child process may use gRPC by creating new channels (a short usage sketch follows this list). (#16264)

  • Improve Pypy compatibility. (#16364)

  • Segmentation fault caused by channel.close() when used with connectivity-state subscriptions. (#16296)

  • Add server reflection guide for Python. Details

  • Refresh pb2 files in examples/python/multiplex. (#16253)

  • Adding python version environmental markers in the new style. (#16235)

  • Add a matching _unwrap_grpc_arg. (#16197)

  • Add Cython functionality to directly wrap grpc_arg. (#16192)
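To make the client-side fork note above concrete, here is a rough sketch (not part of the official release notes); the address is illustrative, no real RPC is issued, and setting the variable before importing grpc is a cautious assumption rather than a documented requirement:

    # Sketch of gRPC client-side fork support on Linux/macOS.
    import os
    os.environ.setdefault("GRPC_ENABLE_FORK_SUPPORT", "1")  # set before grpc is imported

    import grpc

    channel = grpc.insecure_channel("localhost:50051")  # parent keeps using this channel

    pid = os.fork()
    if pid == 0:
        # Child process: create a fresh channel for any new RPCs instead of reusing the parent's.
        child_channel = grpc.insecure_channel("localhost:50051")
        child_channel.close()
        os._exit(0)
    else:
        os.waitpid(pid, 0)
        channel.close()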

Downloads:


          Jewelry Gift Idea Gift For Her Snake Ear Wrap Snake Ear Cuff Snake Earring Silver Python Ear Wrap Snake Gift For Woman Snake Jewelry Serpent by martymagic      Cache   Translate Page      

129.00 USD

This solid sterling silver Python Snake Ear Wrap wraps gracefully around the left ear only. Three coils of the snake "cuff" the edge of the ear for added security. This curvaceous snake is beautifully detailed down to its belly scales on the reverse. Slither home with this striking beauty!

It is my Python Snake Ear Wrap that Sofia Boutella wears in The Mummy movie, starring Tom Cruise and Russell Crowe, released in June 2017.

Our ear wraps require no piercings. The ear wraps have a sturdy wire that wraps behind the ear similar to a "Blue Tooth" headset.

This item usually ships the same or next business day.

All Marty Magic Jewelry is packaged in a beautiful purple box, embossed with the gold foil Marty Magic dragon logo. Perfect for any occasion.

*This is the genuine Python Snake Ear Wrap, designed by Marty Magic in Santa Cruz, California and made in the U.S.A.


          Senior Embedded Software Developer - SED Systems - Saskatoon, SK      Cache   Translate Page      
Familiarity with Matlab, Python, JavaScript, Java, HTML5; The ability to obtain a Secret security clearance and meet the eligibility requirements outlined in...
From SED Systems - Sat, 30 Jun 2018 07:14:09 GMT - View all Saskatoon, SK jobs
          Test Leader - hexatier - Leader, SK      Cache   Translate Page      
Development experience with Python and Java. Experience working with cross-functional teams including engineering, support and senior management is required....
From hexatier - Fri, 20 Jul 2018 09:43:27 GMT - View all Leader, SK jobs
          Cyber Engineer Entry, Mid, Senior, Manager - Dulles, VA - TS/SCI - IntellecTechs, Inc. - Dulles, VA      Cache   Translate Page      
Java, Swing, Hibernate, Struts, JUnit, Perl, Ruby, Python, HTML, C, C++, .NET, ColdFusion, Adobe, Assembly language, etc....
From Indeed - Sun, 12 Aug 2018 23:51:07 GMT - View all Dulles, VA jobs
          Cyber Engineer - TS/SCI Required - Talent Savant - Dulles, VA      Cache   Translate Page      
Java, Swing, Hibernate, Struts, JUnit, Perl, Ruby, Python, HTML, C, C++, .NET, ColdFusion, Adobe, Assembly language, etc....
From Talent Savant - Fri, 27 Jul 2018 06:03:39 GMT - View all Dulles, VA jobs
          Senior Cyber Engineer - TS/SCI Required - Talent Savant - Dulles, VA      Cache   Translate Page      
Java, Swing, Hibernate, Struts, JUnit, Perl, Ruby, Python, HTML, C, C++, .NET, ColdFusion, Adobe, Assembly language, etc. Senior Cyber Engineer....
From Talent Savant - Fri, 27 Jul 2018 05:57:09 GMT - View all Dulles, VA jobs
          Cyber Engineer - Criterion Systems - Sterling, VA      Cache   Translate Page      
Java, Swing, Hibernate, Struts, JUnit, Perl, Ruby, Python, HTML, C, C++, .NET, ColdFusion, Adobe, Assembly language, etc....
From Criterion Systems - Mon, 20 Aug 2018 17:51:49 GMT - View all Sterling, VA jobs
          Cyber Security Engineer - ProSOL Associates - Sterling, VA      Cache   Translate Page      
Java, Swing, Hibernate, Struts, JUnit, Perl, Ruby, python, HTML, C, C++, .NET, ColdFusion, Adobe, Assembly language, etc. ProSol is supporting a U.S....
From ProSOL Associates - Thu, 09 Aug 2018 03:12:29 GMT - View all Sterling, VA jobs
          5 Python libraries for hobby projects      Cache   Translate Page      

Python is a very versatile language and is used in dozens of different environments and fields. Python can of course be very useful in professional contexts, but it is also ideal for "hobby" projects. With that in mind, today we present 5 libraries for developing projects that are not necessarily aimed at the market, perhaps to satisfy curiosity or to pursue a hobby.

Dataset

Let's start with Dataset: this library takes care of saving data to a database quickly and easily. Essentially it is an API that simplifies the work of cataloguing data to be organized later, possibly with other tools. …

The post 5 librerie Python per progetti amatoriali appeared first on Edit.


          For the sake of political correctness, the reserved words master and slave will be removed from Python's code      Cache   Translate Page      
STFW.Ru: Important changes are coming soon to the Python programming language, which recently climbed to third place in the TIOBE ranking of programming languages. The reserved words master ("owner") ... will be removed from it
          Princess Bride (The Princess Bride) (1987) by Rob Reiner       Cache   Translate Page      


The Princess Bride didn't floor me on first viewing, and a second viewing hardly makes me revise my judgment upward (which is why I don't dare go back, on some low evening, to When Harry Met Sally, a fond teenage memory, but since then, eh?). Well, the premise isn't bad in itself: Columbo visits his grandson and, rather than telling him about some dumb investigation, gets the urge to tell a fairy tale. But careful, not just any over-familiar fairy tale: something with, sure, a blonde princess (Robin Wright, still very demure) and an invincible Prince Charming, but also plenty of little original touches meant to multiply the surprises and so please the kid. Yes, Reiner does get off the beaten track, at times (deadpan humour, funny beasts (that big rat and those shrieking eel-scorpionfish things are ridiculous), amusing cameos). No, Reiner is unfortunately no Monty Python: the tone of the tale is a bit too cute and flirts too rarely with the world of the absurd or of pure delirium. You can feel there's an effort, I'm not saying otherwise. But an effort that quickly falls flat.


So a princess is kidnapped and her husband, missing for five years, comes to rescue her and snatch her from the clutches of the wicked king. Fine. Along the way we come across sets of varying polish (it often feels a bit kitsch, but then that also gives the thing a slightly "off-beat" side), second-string characters who aren't always captivating (a funny but rather dim Spaniard, a giant who is giant but rather dim, an over-made-up wizard who rather weighs things down (Billy Crystal, my god, for a moment I almost thought it was Robin Williams, he hams it up so much under his fifteen tons of make-up), and some rather cheap "surprises" (the haunted forest, meh; the machine "that takes away years of your life", bof...). You smile a little at the cowardice of certain "tough guy" characters, or at the strategic stupidity of the good guys who still manage to reach their goal. There is an attempt, as I was saying, on Reiner's part to go for good-natured humour, playing with the codes the better to subvert them, but it never goes very far; the princess and the prince end up kissing in front of a sunset, when the death of one of them would have had much more panache... In short, a tale that gets a little out of the ruts, but whose attempts at humour stay too mainstream. Entertainment a bit too tame to really stick in the memory.


 The Criterion Collection


          For reflection      Cache   Translate Page      
When the farce of 2 + 2 = 22 is pushed forward, the games begin:
"Exposing negative statistics about immigration sparked angry accusations of bigotry"
"sometimes values such as academic freedom and free speech come into conflict with other values to which Penn State was committed"
And all of this turns into a Monty Python sketch.
          Former Victoria's Secret model Lyndsey Scott is seriously into programming (7 photos)      Cache   Translate Page      
An Instagram account that publishes jokes and humour about IT posted a photo of the well-known American model Lyndsey Scott. The photo was captioned: "This Victoria's Secret model can code in Python, C++, Java, MIPS and Objective-C". Naturally, the reaction from users didn't take long: few believed it, and everyone started joking about it.
          SikuliX - Automate Anything - Python Based Sikuli Scripting      Cache   Translate Page      
SikuliX - Automate Anything - Python Based Sikuli Scripting

Category: Tutorial

          Mastering Unsupervised Learning with Python [Video]      Cache   Translate Page      

Mastering Unsupervised Learning with Python [Video] English | MP4 | AVC 1920×1080 | AAC 48KHz 2ch | 3h 52m | 834 MB eLearning | Skill level: All Levels

The post Mastering Unsupervised Learning with Python [Video] appeared first on eBookee: Free eBooks & Video Tutorials Download.


          Secret Recipes of the Python Ninja      Cache   Translate Page      

eBook Details: Paperback: 380 pages Publisher: WOW! eBook (May 21, 2018) Language: English ISBN-10: 1788294874 ISBN-13: 978-1788294874 eBook Description: Secret Recipes of the Python Ninja: Test your Python programming skills by solving real-world problems

The post Secret Recipes of the Python Ninja appeared first on eBookee: Free eBooks & Video Tutorials Download.


          Begin to Code with Python      Cache   Translate Page      

eBook Details: Paperback: 528 pages Publisher: WOW! eBook; 1st edition (December 18, 2017) Language: English ISBN-10: 1509304525 ISBN-13: 978-1509304523 eBook Description: Begin to Code with Python: Become a Python programmer and have fun doing it!

The post Begin to Code with Python appeared first on eBookee: Free eBooks & Video Tutorials Download.


          JAVA DEVELOPER Junior (6-12 months' experience) - General Software - Madrid, Spain      Cache   Translate Page      
At General Software we have several projects with one of our strategic clients in the banking sector, for which we are looking for new colleagues with some initial JAVA experience so we can train them on different projects. We are looking for: development experience in JAVA (between 6 and 12 months). Nice to have: experience in Python and SQL. We offer: flexible working hours, a stable project, a training plan (including SCRUM and Cloud certifications...), a career plan, contract...
          The top four web performance challenges      Cache   Translate Page      

Counting down the charts—what will be in the number one spot?

Danielle and I have been doing some front-end consultancy for a local client recently.

We’ve both been enjoying it a lot—it’s exhausting but rewarding work. So if you’d like us to come in and spend a few days with your company’s dev team, please get in touch.

I’ve certainly enjoyed the opportunity to watch Danielle in action, leading a workshop on refactoring React components in a pattern library. She’s incredibly knowledgable in that area.

I’m clueless when it comes to React, but I really enjoy getting down to the nitty-gritty of browser features—HTML, CSS, and JavaScript APIs. Our skillsets complement one another nicely.

This recent work was what prompted my thoughts around the principles of robustness and least power. We spent a day evaluating a continuum of related front-end concerns: semantics, accessibility, performance, and SEO.

When it came to performance, a lot of the work was around figuring out the most suitable metric to prioritise:

  • time to first byte,
  • time to first render,
  • time to first meaningful paint, or
  • time to first meaningful interaction.

And that doesn’t even cover the more easily-measurable numbers like:

  • overall file size,
  • number of requests, or
  • pagespeed insights score.

One outcome was to realise that there's a tendency (in performance, accessibility, or SEO) to focus on what's easily measurable, not because it's necessarily what matters, but precisely because it is easy to measure.

Then we got down to some nuts’n’bolts technology decisions. I took a step back and looked at the state of performance across the web. I thought it would be fun to rank the most troublesome technologies in order of tricksiness. I came up with a top four list.

Here we go, counting down from four to the number one spot…

4. Web fonts

Coming in at number four, it's web fonts. Sometimes it's the combined weight of multiple font files that's the problem, but more often than not, it's the perceived performance that suffers (mostly because of when the web fonts appear).

Fortunately there’s a straightforward question to ask in this situation: WWZD—What Would Zach Do?

3. Images

At the number three spot, it’s images. There are more of them and they just seem to be getting bigger all the time. And yet, we have more tools at our disposal than ever—better file formats, and excellent browser support for responsive images. Heck, we’re even getting the ability to lazy load images in HTML now.

So, as with web fonts, it feels like the impact of images on performance can be handled, as long as you give them some time and attention.

2. Our JavaScript

Just missing out on making the top spot is the JavaScript that we send down the pipe to our long-suffering users. There’s nothing wrong with the code itself—I’m sure it’s very good. There’s just too damn much of it. And that’s a real performance bottleneck, especially on mobile.

So stop sending so much JavaScript—a solution as simple as Monty Python’s instructions for playing the flute.

1. Other people’s JavaScript

At number one with a bullet, it’s all the crap that someone else tells us to put on our websites. Analytics. Ads. Trackers. Beacons. “It’s just one little script”, they say. And then that one little script calls in another, and another, and another.

It’s so disheartening when you’ve devoted your time and energy into your web font loading strategy, and optimising your images, and unbundling your JavaScript …only to have someone else’s JavaScript just shit all over your nice performance budget.

Here’s the really annoying thing: when I go to performance conferences, or participate in performance discussions, you know who’s nowhere to be found? The people making those third-party scripts.

The narrative around front-end performance is that it’s up to us developers to take responsibility for how our websites perform. But by far the biggest performance impact comes from third-party scripts.

There is a solution to this, but it’s not a technical one. We could refuse to add overweight (and in many cases, unethical) third-party scripts to the sites we build.

I have many, many issues with Google’s AMP project, but I completely acknowledge that it solves a political problem:

No external JavaScript is allowed in an AMP HTML document. This covers third-party libraries, advertising and tracking scripts. This is A-okay with me.

The reasons given for this ban are related to performance and I agree with them completely. Big bloated JavaScript libraries are one of the biggest performance killers on the web.

But how can we take that lesson from AMP and apply it to all our web pages? If we simply refuse to be the one to add those third-party scripts, we get fired, and somebody else comes in who is willing to poison web pages with third-party scripts. There’s nothing to stop companies doing that.

Unless…

Suppose we were to all make a pact that we would stand in solidarity with any of our fellow developers in that sort of situation. A sort of joining-together. A union, if you will.

There is power in a factory, power in the land, power in the hands of the worker, but it all amounts to nothing if together we don’t stand.

There is power in a union.

This was originally posted on my own site.


[When there's nothing better to do] The Python project is dropping the terms "master" and "slave" for the sake of political correctness

Guido van Rossum has put an end to the dispute that arose among Python developers over changes proposed by Victor Stinner, who works at Red Hat and is one of the core Python developers. Victor proposed purging the Python code base of the words "master" and "slave", since their use is politically incorrect and is associated with slavery and inequality. A few years ago a wave of similar renamings already touched several open source projects: for example, in Drupal the terms "master" and "slave" were replaced by "primary" and "replica", and in Django and CouchDB by "leader" and "follower".

The proposal triggered a heated discussion that split the community into supporters and opponents of the renaming. Opponents argued that politics and programming should not be mixed: "master" and "slave" are merely terms whose meaning is already well established in computing and has nothing to do with an endorsement of slavery. Moreover, replacing established terms will inevitably cause confusion among developers and may break backward compatibility. It was also pointed out that it is one thing when certain expressions are offensive or unclear, but in the case of "master" and "slave" we are dealing only with vaguely formed notions of political correctness that get in the way of using plain English.

Despite his intention to step down as benevolent dictator for life, Guido van Rossum had to intervene in the dispute and make the final decision. Of the five commits proposed during the discussion of renaming "master" and "slave" to parent/main/server and child/worker, four were accepted into the code base. The changes will appear in the Python 3.8 release. One change was rejected because it affects the established UNIX ptys terminology used by other projects.

Among the accepted changes:

  • "master process" has been replaced by "parent process"
  • "master option mappings" by "main option mappings"
  • "master pattern object" by "main pattern object"
  • in the ssl module the word "master" has been replaced by "server"
  • in pty.spawn() the master_read parameter has been renamed to parent_read
  • the pty.slave_open() method has been renamed to pty.child_open(), but the pty.slave_open call is kept for now for backward compatibility
  • in os.openpty() and os.forkpty() the master_fd/slave_fd parameters have been renamed to parent_fd/child_fd
  • the internal variables master_fd, slave_fd and slave_name have been renamed to parent_fd, child_fd and child_name
  • the "--slaveargs" option has been replaced by "--worker-args"
  • the run_tests_slave() function has been renamed to run_tests_worker()

Update: the developer community of the Redis DBMS is also discussing a proposal to get rid of the terms "master" and "slave". More radical changes are being proposed there, such as renaming the "SLAVEOF" operation to "REPLICAOF" and the "slaveof" setting to "replicaof" (for compatibility, support for "SLAVEOF" will be kept as an option). Support for the "slave" flag in the INFO and ROLE commands will be kept for now, since changing it would break too much; in the future, however, it is planned to offer an alternative to INFO and to replace "slave" with "replica" in ROLE.

The discussion was started by Salvatore Sanfilippo, the creator of the Redis DBMS, who does not think the renaming is justified but is forced to react because of pressure from political activists calling for Redis not to be used due to its discriminatory terminology.


(https://www.opennet.ru/op...)
          (USA) Principal Data Scientist      Cache   Translate Page      
ROLE SUMMARY The role of the Data Scientist is to discover, describe, diagnose, predict, and solve complex problems through the creation and use of analytic systems and visualization techniques. All levels of Data Scientists use statistical modeling, data mining and visualization techniques to create value that improves patient care. The Data Scientist is also capable of creating and integrating large data sets into useable, actionable information. The Principal Data Scientist – Solutions Architect and Developer provides leadership for the design, development and maintenance of systems architecture and software applications to support clinical analytics, business intelligence, and population health initiatives. Predicts emerging customer needs and develops innovative solutions to meet them. Acts independently to develop systems and software programs to meet the emerging needs of the Memorial Family of Services (MFOS) clinically integrated network. Sets development and programming standards for the analytics team. QUALIFICATIONS AND REQUIREMENTS Education: Bachelor's degree preferred. Equivalent experience may be substituted in lieu of degree. Experience: Experience with systems architecture development and software development. High level of expertise with programming languages such as SQL, SAS or Python. ABOUT US At Virginia Mason Memorial, our vision is to "create healthy communities one person at a time." This means that each member of our award-winning team works to provide our patients, and their families, with the best medical and individual care possible. Why do we do what we do? As the region's leading health care provider and Yakima's largest employer, we believe that by improving health, we will transform Yakima! We work together to demonstrate our values of Respect, Accountability, Teamwork, Stewardship, and Innovation for everyone who walks through our doors, patients and coworkers alike. Does this sound like the place for you? We would love to hear from you! Come be the best of the best with us! We offer competitive benefits and compensation, including a generous 401K plan, medical, dental, vision, and life insurance, an employee wellness clinic and leadership development and education. "Yakima Valley Memorial Hospital provides reasonable accommodations to assist qualified individuals in order to perform the essential duties/requirements their job requires. The description is intended to provide only basic guidelines for meeting job requirements and serves as merely a summary rather than a complete listing of duties. Responsibilities, knowledge, skills, abilities, and working conditions may change as needs evolve. This job description does not constitute a contract as employment is at will."
          (USA) Principal Data Scientist - Signal Health      Cache   Translate Page      
ROLE SUMMARY The role of the Data Scientist is to discover, describe, diagnose, predict, and solve complex problems through the creation and use of analytic systems and visualization techniques. All levels of Data Scientists use statistical modeling, data mining and visualization techniques to create value that improves patient care. The Data Scientist is also capable of creating and integrating large data sets into useable, actionable information. The Principal Data Scientist – Predictive Analytics designs and develops predictive analytics and forecasting projects to meet the emerging needs of the Memorial Family of Services (MFOS) clinically integrated network. Incumbents have a strong statistical and analytic background that allows them to derive insight and value out of data to improve financial, operational, and clinical analytics reporting. QUALIFICATIONS AND REQUIREMENTS Education: Bachelor's degree preferred. Equivalent experience may be substituted in lieu of degree. Experience: Experience with systems architecture development and software development. High level of expertise with programming languages such as SQL, SAS or Python. ABOUT US At Virginia Mason Memorial, our vision is to "create healthy communities one person at a time." This means that each member of our award-winning team works to provide our patients, and their families, with the best medical and individual care possible. Why do we do what we do? As the region's leading health care provider and Yakima's largest employer, we believe that by improving health, we will transform Yakima! We work together to demonstrate our values of Respect, Accountability, Teamwork, Stewardship, and Innovation for everyone who walks through our doors, patients and coworkers alike. Does this sound like the place for you? We would love to hear from you! Come be the best of the best with us! We offer competitive benefits and compensation, including a generous 401K plan, medical, dental, vision, and life insurance, an employee wellness clinic and leadership development and education. "Yakima Valley Memorial Hospital provides reasonable accommodations to assist qualified individuals in order to perform the essential duties/requirements their job requires. The description is intended to provide only basic guidelines for meeting job requirements and serves as merely a summary rather than a complete listing of duties. Responsibilities, knowledge, skills, abilities, and working conditions may change as needs evolve. This job description does not constitute a contract as employment is at will."
          Senior Data Analyst - William E. Wecker Associates, Inc. - Jackson, WY      Cache   Translate Page      
Experience in data analysis and strong computer skills (we use SAS, Stata, R and S-Plus, Python, Perl, Mathematica, and other scientific packages, and standard...
From William E. Wecker Associates, Inc. - Sat, 23 Jun 2018 06:13:20 GMT - View all Jackson, WY jobs
          (USA-OH-Beaver Creek) Cyber Test Engineer - 63428234      Cache   Translate Page      
Cyber Test Engineer - 63428234 Job Code: 63428234 Job Location: Beaver Creek, OH Category: Software Engineering Last Updated: 09/12/2018 Apply Now! Job Description: Candidate will lead the test of cyber operations products to ensure verification of requirements and validation that the system meets the Concept of Operations. Requirements: Minimum Qualifications and Desirables: • Act as the primary interface with the customer regarding test and verification activities of cyber operations products. • Work with multiple developers to create stress, functional, acceptance, and ad-hoc tests based on the requirements and operational scenarios • Automate the testing and characterization of cyber products to the extent possible given budgets, project time frame, and long term benefit of the product • Design and configure system test components necessary to perform end-to-end testing of cyber products • Develop scripts as necessary to integrate components, perform new capabilities, facilitate testing, etc. • Characterize cyber operations products under a multitude of configurations • Monitor automated test executions and work with teams to analyze the cause and apply fixes • Generate and present to the end customer various milestone packages including: Software Requirements Review, Test Readiness Review, and Acceptance Review • Specify and improve test and development network infrastructures • Programming skills in scripting languages such as Python, Bash, Expect, Powershell • General understanding of networking protocols • General understanding of virtual machines, specifically VMware Workstation and ESXi • Setup and configuration of networks using VLANS, Switches and Windows and Linux networking. • Experience with protocol analysis using Wireshark or other packet analysis tools • Experience with test automation frameworks, scripting of automated tests, and capturing and analyzing results • Experience with MySQL or other databases • Python, C/C++/C# development experience • Top Secret security clearance For more information, please send your resume and we will get back to you. Equal Opportunity Employer Minorities/Women/Veterans/Disabled
          (USA-OH-Brooklyn) Engineer IV      Cache   Translate Page      
Engineer IVinBrooklyn, OHatKey Bank- Corporate Date Posted:9/12/2018 ApplyNot ready to Apply? Share With: Job Snapshot + Employee Type: Full-Time + Location: 4910 Tiedeman Road Brooklyn, OH + Job Type: Information Technology + Experience: Not Specified + Date Posted: 9/12/2018 Job DescriptionAbout the JobThis is a virtual/remote position which can be performed from most states within the United states. This position is as a member of the Web Systems Services team responsible for the engineering and support of the WebSphere Application Server environment and various other web application and web server platforms at Key.Essential Job FunctionsDesign and implement enterprise class web and application server infrastructure solutions. Provide support for purpose of implementing applications on WebSphere Application Server (WAS) and IBM HTTP Server (IHS) - Interaction and communication with multiple teams - Providing continuous improvement ideas to reduce expenses and/or improve efficiency - Defining high-level application platform architectural guidelines - Creating and maintaining documentation across multiple environments on supported technologiesRequired QualificationsRequired SkillsTechnical - Bachelor s degree or equivalent experience - 3+ years of IT experience - Thorough Linux knowledge - Working understanding of web and application server technologies and concepts - Working knowledge of relevant network technologies (TCP/IP, DNS, SSH, HTTP, FTP, SSL, PKI, Firewall and DMZ concepts) - Working knowledge of network and operating system security concepts - Working knowledge of network administration and problem determination skills - Working knowledge of operating system administration (Unix/Linux) and problem determination skills - Working understanding of Java - Working knowledge of databases and database concepts (Oracle, Microsoft SQL Server, DB2) – Working knowledge of scripting and scripting languages (BASH, Perl, Jython, JRuby)Required SkillsProfessional - Working understanding of highly available/fault tolerant solutions - Ability to learn new skills quickly - Proven analytical/problem solving ability - Highly motivated and self-sufficient - Team player with strong communication and interpersonal skills - Ability to work independently and take ownership of initiatives, projects and assignments - Ability to multi-task and manage competing priorities Preferred Skills: - Familiarity with Web and Application Server Technologies (WebSphere Application Server, WebLogic, Apache, Tomcat, JBOSS) - Experience with client connectivity (MS SQL, MQ, DB2, ECI, ODBC, JDBC, etc.) - Experience performing load tests, capacity planning, and application configuration on the following Operating Systems: Linux (Redhat) - Scripting using PERL, Shell, Python, Jython, etc.ABOUT KEY:KeyCorp's roots trace back 190 years to Albany, New York. Headquartered in Cleveland, Ohio, Key is one of the nation's largest bank-based financial services companies, with assets of approximately $134.5 billion at March 31, 2017. Key provides deposit, lending, cash management, insurance, and investment services to individuals and businesses in 15 states under the name KeyBank National Association through a network of more than 1,200 branches and more than 1,500 ATMs. 
Key also provides a broad range of sophisticated corporate and investment banking products, such as merger and acquisition advice, public and private debt and equity, syndications, and derivatives to middle market companies in selected industries throughout the United States under the KeyBanc Capital Markets trade name. KeyBank is Member FDIC.ABOUT THE BUSINESS:Key Technology and Operations (KTO) is Key Bank s shared services organization for technology, operational, and servicing functions supporting business partners and clients across all lines of business. Within the overall organization, KTO provides efficient, reliable and secure technology; creates an effective variable cost technology delivery model that maximizes the return on IT spend; orchestrates the efficient use of corporate information and technology assets; and supports innovation that creates competitive distinction. KTO is effective and efficient in payment and deposit servicing, loan servicing, exception and dispute processing, investment and support services, sourcing and procurement, as well as enterprise-wide fraud prevention, investigations and operational support to human resources and the Bank s BSA/AML program.FLSA STATUS:ExemptKeyCorp is an Equal Opportunity and Affirmative Action Employer committed to engaging a diverse workforce and sustaining an inclusive culture. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status.JobID: 31630BR
          (USA-CA-San Mateo County) Information Technology Analyst - Human Services Agency (Open & Promotional)      Cache   Translate Page      
Information Technology Analyst - Human Services Agency (Open & Promotional) Print Apply Information Technology Analyst - Human Services Agency (Open & Promotional) Salary $93,662.40 - $117,083.20 Annually Location San Mateo County, CA Job Type Full-Time Department Human Services Agency Job Number V235AK Closing 9/25/2018 11:59 PM Pacific + Description + Benefits + Questions Description San Mateo County's Human Services Agency is seeking qualified Information Technology Analysts. Under general supervision, the Information Technology Analyst will provide technical analytical support for the department information systems. The incumbent will perform difficult and responsible duties in the following general area: system analysis, application development and enhancement, designated and specialized software systems, hardware and software operations, identification of user requirements, and end-user training and support; management of the operation, maintenance, and upgrading of department-specific applications; serve as technical advisor and liaison to vendors, contractors, and department staff on system applications, software products, services and issues; oversee and guide the work of technical support staff participating in system installation and or enhancement projects; and performs related duties as required; maintaining department web-sites. Duties include, but are not limited to, the following: + Manage the operation, maintenance, and upgrading of department-specific applications. + Serve as technical advisor and liaison to vendors, contractors and department staff on system applications, software products, services and issues. + Facilitate communication between staff and vendors regarding system issues. + Provide technical support to users; Identifying, diagnosing, escalating, and resolving system issues; Track issues/incidents in ticketing system; Send regular updates to users on the incident/request status; Coordinate with users and outside vendors, when necessary, to respond to service requests; implement solutions to problems. + Oversee and guide the work of technical support staff participating in system installation and/or enhancement projects. + Monitor systems to ensure adequate performance and maintenance; analyze and recommend enhancements. + Represent San Mateo County in State/Consortium sponsored technical workgroups/committees. + Participate in the development of new applications; test software to ensure compatibility. + Assess and optimize system designs through review and analysis of user needs, customizing systems through system design and administration to meet the changing business needs of the users. + Analyze data processing needs; research and evaluate software on multiple platforms to assist users to meet their departmental goals; assist in developing the evaluation criteria for software. + Conduct feasibility studies; evaluate vendor products; make recommendations based on user requirements and system analysis to ensure adequate planning. + Research and evaluate technology; identify opportunities for improvements through automation. + Create and generate reports and statistics to meet user and program requirements; interface with other departments. + Conduct end-user training. + Guide central IT staff in the implementation and/or upgrade of systems. + Maintaining department websites. + Perform related duties as required. The ideal candidate will: + Have at least five years of increasingly responsible experience in information technology support. 
+ Have proven analytical and problem-solving abilities with disciplined troubleshooting methodologies. + Possess excellent customer service skills and experience in IT support of a diverse user community. + Possess significant experience in at least one of the following: + Application support service experiences, analyzes source documents, database structure and management report requirements with solid knowledge in Change and Release processes. + Project support experience with solid knowledge of completing project task oriented services, addressing the risks and benefits of the environment supported. + Service Desk experience with solid knowledge in ITIL service support and analysis, diagnosis, and problem solving. + Possess a working knowledge of at least one of the following: + State applications such as CalWIN, CalHEERS, MEDS, or CWS/CMS. + Policy and processes for the administration of Human Services Agency programs such as Medi-Cal, CalFresh, CalWORKS, etc. + Microsoft desktop and server products such as Windows 7, Microsoft Office Suites, Server 2008/2012, etc. + Business Intelligent products such as SAP Business Object, IBM Cognos, Oracle Business Intelligence Enterprise Edition, Information Builders WebFocus, etc. + Programming language such as ASP, ASP.net, PowerShell, Python etc. + Have interest and ability to learn technologies, as needed, including departmental-specific software. + Have interest and ability to learn the Human Services Agency business domain necessary to be effective in the Application Support role. + Possess outstanding written and verbal communication skills and the ability to interact with diverse technical and non-technical groups spanning all organizational levels. + Self-motivated with the ability to prioritize, meet deadlines, and manage changing priorities. + Ability to be flexible and work well in a high-pressure production environment with changing priorities and frequent "emergency issues". + Experience working with vendors as a support professional to drive resolution to issues and establish lifecycle management plans. There are currently two vacancies. NOTE: The eligible list generated from this recruitment may be used to fill future extra-help, term, unclassified, and regular classified vacancies. Qualifications Education and Experience: Any combination of education and experience that would likely provide the required knowledge, skills and abilities is qualifying. A typical way to qualify is two years of experience in application development and support. Knowledge of: Data processing principles, concepts and terminology; business mathematics; record keeping and filing principles and practices and job planning and prioritizing techniques; operations, purposes, and functions of the application being supported and information needs and requirements of the systems users. 
Skill/Ability to: Analyze operational and systems problems, evaluate alternatives and reach sound conclusions; use initiative and sound independent judgment within established procedural guidelines to support automation systems; prepare clear, concise and accurate documentation, instructions, correspondence and other written materials; develop and present comprehensive training material for the appropriate area; maintain accurate records and files; organize work, set priorities and meet critical deadlines; establish and maintain effective working relationships with those contacted in the course of the work; provide support to multiple users of broad based applications; install and support hardware and software used by the department; respond effectively to user problems and needs; plan and coordinate the work of others; communicate effectively, orally and in writing, with both technical and non-technical personnel; analyze user requirements and recommend appropriate data automation products. Application/Examination Anyone may apply. Current San Mateo County and San Mateo County Superior Court employees with at least six months (1040 hours) of continuous service in a classified regular, probationary, SEIU or AFSCME represented extra-help/term position prior to the final filing date will receive five (5) points added to their final passing score on this examination. Responses to the supplemental questions must be submitted in addition to our regular employment application form. The examination will consist of an application screening (weight: pass/fail) based on the candidate's application and responses to the supplemental questions. Candidates who pass the application screening will be invited to a panel interview (weight: 100%). Depending on the number of applicants, an application appraisal of education and experience may be used in place of other examinations, or further evaluation of work experience may be conducted to group applicants by level of qualification. All applicants who meet the minimum qualifications are not guaranteed advancement through any subsequent phase of the examination. All examinations will be given in San Mateo County, California, and applicants must participate at their own expense. IMPORTANT: Applications for this position will only be accepted online. If you are currently on the County's website, you may click the 'Apply Online' button above or below. If you are not on the County's website, please go to http://jobs.smcgov.org to apply. ~ Tentative Recruitment Schedule ~ Final Filing Date: September 25, 2018 Screening Date: September 26, 2018 Combined Panel Interview: October 10, 2018 and/or October 11, 2018 At the County of San Mateo, we take pride in the way our employees bring together their diverse backgrounds, experiences, and perspectives to serve our community's needs. The County is an Equal Opportunity Employer. Analyst: Arlene Cahill (09112018) (Information Technology Analyst - V235) Please visit http://hr.smcgov.org/sites/hr.smcgov.org/files/SEIU.pdf for a complete listing of all benefits for this classification. Benefits are offered to eligible employees of the County of San Mateo. All benefits are subject to change. NOTE: Employees hired on or after January 1, 2013 may be subject to new Pension Reform retirement laws. As an additional benefit, the County offers extensive training and development programs designed to improve skills and enhance career opportunities. Most programs are offered on County time at no cost to you. 
County employees are also covered by the federal Social Security system and earn benefits for retirement based on salary and time worked. 01 INSTRUCTION: In order for your application to receive every consideration in the selection process, you must complete each of the following Supplemental Questions. This Supplemental Questionnaire is the exam for this job posting, and your responses will be scored against a predetermined formula. Your responses should be consistent with the information in your standard San Mateo County application and may be subject to verification. A resume will not be accepted in lieu of your answers to the questions. + I have read and understand the above instruction. 02 Describe in detail how your education and work experience qualify you for the Information Technology Analyst position. Be specific about where you gained the experience, duties and responsibilities you performed and how long you worked in that capacity. Include any relevant certifications, training or classes you have attended, and any special projects you have participated in that has provided you with the skills and knowledge required for this position. (A resume will NOT be accepted as a substitute for your response.) 03 Describe your experience working as a support professional collaborating with subject matter experts to drive resolution to issues. What was your role and/or duties, any challenges encountered and how you overcome those challenges? 04 Describe your project and/or application support experience. As part of your response, provide an example that demonstrates your understanding of change and release processes. Explain your ability to communicate changes to technical and non-technical stakeholders. 05 Describe a situation at work that characterize your analytical and problem-solving ability. Required Question Agency County of San Mateo Address San Mateo County Human Resources Dept 455 County Center Redwood City, California, 94063-1663 Phone (650) 363-4343 (650) 363-4303 Website http://jobs.smcgov.org Apply Your browser does not support the IFRAME feature, which is required by this web page.
          Doubts about choosing a programming direction      Cache   Translate Page      
Replies: 2 Last poster: t_captain at 13-09-2018 08:02 Topic is Open. Java, .NET and Python are the biggest on the back-end, probably in that order. Node.js is growing, because of the "single stack" promise: one package manager, one toolchain around build and CI, more seamless integration such as server-side prerendering, and the evolution of JS into a serious programming language. But it is still small and you mainly find it at startups and somewhat smaller tech companies.
          Python Developer - Tundra Technical - Québec City, QC      Cache   Translate Page      
*Title*: Python Developer *Type*: Contract, 1 year and 7 months *Location*: Québec City *Required skills* - Between 3 and 8 years of experience *Requirements*: ...
From Indeed - Tue, 04 Sep 2018 17:45:47 GMT - View all Québec City, QC jobs
          Cyber Engineer Entry, Mid, Senior, Manager - Dulles, VA - TS/SCI - IntellecTechs, Inc. - Dulles, VA      Cache   Translate Page      
Java, Swing, Hibernate, Struts, JUnit, Perl, Ruby, Python, HTML, C, C++, .NET, ColdFusion, Adobe, Assembly language, etc....
From Indeed - Sun, 12 Aug 2018 23:51:07 GMT - View all Dulles, VA jobs
          Cyber Engineer - TS/SCI Required - Talent Savant - Dulles, VA      Cache   Translate Page      
Java, Swing, Hibernate, Struts, JUnit, Perl, Ruby, Python, HTML, C, C++, .NET, ColdFusion, Adobe, Assembly language, etc....
From Talent Savant - Fri, 27 Jul 2018 06:03:39 GMT - View all Dulles, VA jobs
          Senior Cyber Engineer - TS/SCI Required - Talent Savant - Dulles, VA      Cache   Translate Page      
Java, Swing, Hibernate, Struts, JUnit, Perl, Ruby, Python, HTML, C, C++, .NET, ColdFusion, Adobe, Assembly language, etc. Senior Cyber Engineer....
From Talent Savant - Fri, 27 Jul 2018 05:57:09 GMT - View all Dulles, VA jobs
          Cyber Engineer - Criterion Systems - Sterling, VA      Cache   Translate Page      
Java, Swing, Hibernate, Struts, JUnit, Perl, Ruby, Python, HTML, C, C++, .NET, ColdFusion, Adobe, Assembly language, etc....
From Criterion Systems - Mon, 20 Aug 2018 17:51:49 GMT - View all Sterling, VA jobs
          Cyber Security Engineer - ProSOL Associates - Sterling, VA      Cache   Translate Page      
Java, Swing, Hibernate, Struts, JUnit, Perl, Ruby, python, HTML, C, C++, .NET, ColdFusion, Adobe, Assembly language, etc. ProSol is supporting a U.S....
From ProSOL Associates - Thu, 09 Aug 2018 03:12:29 GMT - View all Sterling, VA jobs
          Looking to Brute Force Password on a website      Cache   Translate Page      
Need to brute force a password based on a given list on a website. The passwords are numeric/barcodes. Please message me for more details or bid and I will contact you. Must be completed within the next... (Budget: $30 - $250 USD, Jobs: Python, Software Testing, Web Scraping, Web Security, Website Testing)
          (USA-NY-New York) Senior Software Engineer - Full Stack      Cache   Translate Page      
Want to work with driven and knowledgeable engineers using the latest technology in a friendly, open, and collaborative environment? Expand your knowledge at various levels of a modern, big-data driven, micro service stack, with plenty of room for career growth? At Unified, we empower autonomous teams of engineers to discover creative solutions for real-world problems in the marketing and technology sectors. We take great pride in building quality software that brings joy and priceless business insight to our end-users. We're looking for a talented Senior Software Engineer to bolster our ranks! Could that be you? What you'll do: • Gain valuable technology experience at a rapidly growing big data company • Take on leadership responsibilities, leading projects and promoting high quality standards • Work with a top-notch team of engineers and product managers using Agile development methodologies • Design, build and iterate on novel, cutting-edge software in a young, fast-moving industry • Share accumulated industry knowledge and mentor less experienced engineers • Participate regularly in code reviews • Test your creativity at Unified hack-a-thons Who you are: You're a senior engineer who is constantly learning and honing your skills. You love exploring complex systems to reveal possible architectural improvements. You're friendly and always willing to help others accomplish their goals. You have strong communication skills that enable you to describe technical issues in layman's terms. Must have: • 4+ years of professional development experience in relevant technologies • Willingness to mentor other engineers • Willingness to take ownership of projects • Backend development experience with Python, Golang, or Java • Frontend development experience with JavaScript, HTML, and CSS • React and supporting ecosystem tech, e.g. Redux, React Router, GraphQL • Experience supporting a complex enterprise SaaS software platform • Relational databases, e.g. Amazon RDS, PostgreSQL, MySQL • Microservice architecture design principles • Strong personal commitment to code quality • Integrating with third-party APIs • REST API endpoint development • Unit testing • A cooperative, understanding, open, and friendly demeanor • Excellent communication skills, with a willingness to ask questions • Demonstrated ability to troubleshoot difficult technical issues • A drive to make cool stuff and seek continual self-improvement • Able to multitask in a dynamic, early-stage environment • Able to work independently with minimal supervision Nice to have: • Working with agile methodologies • Experience in the social media or social marketing space • Git and Github, including Github Pull-Request workflows • Running shell commands (OS X or Linux terminal) • Ticketing systems, e.g. JIRA Above and beyond: • Amazon Web Services • CI/CD systems, e.g. Jenkins • Graph databases, e.g. Neo4J • Columnar data stores, e.g. Amazon Redshift, BigQuery • Social networks APIs, e.g. Facebook, Twitter, LinkedIn • Data pipeline and streaming tech, e.g. Apache Kafka, Apache Spark
          (USA-NY-New York) Software Engineer - Full-Stack      Cache   Translate Page      
Want to work with driven and knowledgeable engineers using the latest technology in a friendly, open, and collaborative environment? Expand your knowledge at various levels of a modern, big-data driven, micro service stack, with plenty of room for career growth? At Unified, we empower autonomous teams of engineers to discover creative solutions for real-world problems in the marketing and technology sectors. We take great pride in building quality software that brings joy and priceless business insight to our end-users. We're looking for a talented Software Engineer to bolster our ranks! Could that be you? What you'll do: • Gain valuable technology experience at a rapidly growing big data company • Work with a top-notch team of engineers and product managers using Agile development methodologies • Design, build and iterate on novel, cutting-edge software in a young, fast-moving industry • Participate regularly in code reviews • Test your creativity at Unified hack-a-thons Who you are: You're a software engineer who is eager to get more experience with enterprise-level software development, and constantly learning and honing your skills. You love to learn about large systems and make them better by fixing deficiencies and finding inefficient designs. You're friendly and always willing to help others accomplish their goals. You have strong communication skills that enable you to describe technical issues in layman's terms. Must have: • 1+ years of professional development experience in relevant technologies • Backend development experience with Python, Golang, or Java • Relational databases, e.g. Amazon RDS, PostgreSQL, MySQL • Strong personal commitment to code quality • Integrating with third-party APIs • REST API endpoint development • Unit testing • Working with agile methodologies • A cooperative, understanding, open, and friendly demeanor • Excellent communication skills, with a willingness to ask questions • Demonstrated ability to troubleshoot difficult technical issues • A drive to make cool stuff and seek continual self-improvement • Able to multitask in a dynamic, early-stage environment • Able to work independently with minimal supervision Nice to have: • Frontend development experience with JavaScript, HTML, and CSS • React and supporting ecosystem tech, e.g. Redux, React Router, GraphQL • Experience supporting a complex enterprise SaaS software platform • Experience in the social media or social marketing space • Git and Github, including Github Pull-Request workflows • Running shell commands (OS X or Linux terminal) • Microservice architecture design principles • Ticketing systems, e.g. JIRA Above and beyond: • Amazon Web Services • CI/CD systems, e.g. Jenkins • Graph databases, e.g. Neo4J • Columnar data stores, e.g. Amazon Redshift, BigQuery • Social networks APIs, e.g. Facebook, Twitter, LinkedIn • Data pipeline and streaming tech, e.g. Apache Kafka, Apache Spark
          (USA-IL-Chicago) Aruba Wireless Architect      Cache   Translate Page      
Lead or technical expert in Aruba. Will lead the implementation and support of infrastructure services and involved in the design. Will work with management to understand and prioritize requirements/needs and solve complex technical problems. Develop standards, policies and procedures for the structure and attributes of tools/systems. Prefer candidates in Chicago area but will consider virtual candidates. Must currently reside in the U.S. and not require sponsorship. + Must be an expert with Aruba ClearPass and Aruba WLAN. + At least 7-10 years practical Layer 2 and Layer 3 internetworking and at least 3-5 years practical hands on Aruba WLAN and 802.11 experience and expertise. + 2-5 years practical experience in software programming (Python, JAVA, C, and web applications) and software integration. + Four-year college or university degree or equivalent training and certification. + Ability to design, deploy and troubleshoot IP and wireless networks; which includes enterprise IP networking, Aruba ClearPass, Aruba AirWave, and RF analysis. + Experience in trouble isolation and remediation at layers 1-4 (IP, MAC, RF, and some application level). + Experience and understanding of LAN/WAN architectures and designs; mobile networking, and cloud networking. ID: 2018-1173 External Company Name: Columbia Advisory Group External Company URL: www.columbiaadvisory.com Street: 200 East Randolph Street
          Anaconda configuration and usage      Cache   Translate Page      
I had always used plain Python and pip; after getting a new computer, I decided to try Anaconda. I will skip the installation process: it is a fully graphical installer and very simple. After installing, open the "Anaconda Prompt" command line with administrator rights and first configure a domestic mirror source (Tsinghua University): conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/ conda config --add...
          Perth Zoo python Cuddles dies      Cache   Translate Page      
PERTH Zoo has paid tribute to its popular five-metre python Cuddles, which died this week.
          (USA-WI-Madison) Developer Intern      Cache   Translate Page      
TDS Telecom is looking for an intern to join one of our development teams. As a member of a development team you will be assisting with design, development, testing, implementation and maintenance of software solutions. You will use a broad spectrum of technologies including but not limited to: + Python + Java + Django + Flask + React + Docker + SQL and NoSQL databases We are looking for students with strong organizational and communication skills who possess a strong interest in application development and a desire to learn: + Agile methodology + DevOps + Containerization TDS Telecom relies on our dynamic 290+ person IT team to innovate using leading edge technologies that empower us to continually move forward. Our diverse team focuses on the development and support of new products and services, while capitalizing on extensive training opportunities and vendor seminars. Team members have a passion for technology and thrive in a fast-paced, high-tech, opportunity-for-growth, and continuous learning environment. School is top priority so TDS offers very flexible schedules, while requiring you to work at least 15 hours per week. As an organization, TDS knows that our own employees are some of the most talented individuals around which is why approximately 40% of our positions over the past year have been filled by internal candidates. This internship will not only give you the opportunity to apply what you are learning in school in a real-world business setting, but it will give you exposure to many other opportunities within the organization that can provide a long-term career path. At TDS we don’t just invest in the success of our customers; we also invest in the success of our community. Local presence and connection to our community is vital to who we are at TDS. TDS has a laser focus on our customers that is supported by a great culture driven by the TDS mission and shared values. TDS strongly believes in partnering with local communities throughout the country and encourages all employees to participate in volunteer opportunities. In 2017 TDS had 543 employees volunteer and contribute in their neighborhoods. Why Join TDS? Our IT Interns partner with seasoned professionals and provide support and input on high-priority technology projects. An internship at TDS offers you: + Valuable hands-on experience working on multiple projects + Access to a wide variety of software and hardware programs + Competitive compensation + Access to an Intern Employee Resource Group (ERG) with the purpose to support a diverse and inclusive community of interns by providing resources for development, opportunities for internal advancement and forums to connect with other interns. This group meets once a month and allows you to hear from other interns about their projects as well as leaders within the organization about their team. They host networking events to get you connected within the organization with the goal of finding fulltime employment within TDS by graduation. 
Additionally, there are monthly social events with other interns throughout the Madison area + An employer that places priority on employee development and training with a key focus on internal promotion + Company culture emphasizes ethics & promotes a healthy life/work balance with a business casual work environment + A challenging, fast-paced environment due to the continually evolving nature of the telecommunications industry and related technology + Opportunity to work with talented and dedicated team members committed to each individual’s success + TDS Telecom actively supports many local charities, local university activities, and community events and encourages all employees to participate in volunteer opportunities + Obtain frequent exposure to, and interaction with, management across TDS Telecom + Work with a variety of programming languages + Develop programs and scripts that will be used by TDS' Internet customers + Modify existing software + Create test data, execute tests and verify results + Resolve software problems + Contribute ideas and designs to the project work Required Qualifications + 1+ semester of programming coursework (Java, .NET, Python) + 1+ semester of coursework done in a Unix or Linux environment Other Qualifications + Knowledge of CSS, HTML, Javascript + RMDB experience/knowledge Requisition ID: 2018-12930 External Company Name: Telephone and Data Systems Inc. External Company URL: www.tdsinc.com
          (USA-WI-Madison) SQL Developer Intern      Cache   Translate Page      
TDS Telecom is looking for an intern to join one of our development teams. As a member of a development team you will be assisting with design, development, testing, implementation and maintenance of software solutions. You will use a broad spectrum of technologies including but not limited to: + SQL + COBOL + JCL + Python + Django + Flask We are looking for students with strong organizational and communication skills who possess a strong interest in application development and a desire to learn: + DevOps + Change Management + Structured Programming Concepts + Software testing TDS Telecom relies on our dynamic 290+ person IT team to innovate using leading edge technologies that empower us to continually move forward. Our diverse team focuses on the development and support of new products and services, while capitalizing on extensive training opportunities and vendor seminars. Team members have a passion for technology and thrive in a fast-paced, high-tech, opportunity-for-growth, and continuous learning environment. School is top priority so TDS offers very flexible schedules, while requiring you to work at least 15 hours per week. As an organization, TDS knows that our own employees are some of the most talented individuals around which is why approximately 40% of our positions over the past year have been filled by internal candidates. This internship will not only give you the opportunity to apply what you are learning in school in a real-world business setting, but it will give you exposure to many other opportunities within the organization that can provide a long-term career path. At TDS we don’t just invest in the success of our customers; we also invest in the success of our community. Local presence and connection to our community is vital to who we are at TDS. TDS has a laser focus on our customers that is supported by a great culture driven by the TDS mission and shared values. TDS strongly believes in partnering with local communities throughout the country and encourages all employees to participate in volunteer opportunities. In 2017 TDS had 543 employees volunteer and contribute in their neighborhoods. Why Join TDS? Our IT Interns partner with seasoned professionals and provide support and input on high-priority technology projects. An internship at TDS offers you: + Valuable hands-on experience working on multiple projects + Access to a wide variety of software and hardware programs + Competitive compensation + Access to an Intern Employee Resource Group (ERG) with the purpose to support a diverse and inclusive community of interns by providing resources for development, opportunities for internal advancement and forums to connect with other interns. This group meets once a month and allows you to hear from other interns about their projects as well as leaders within the organization about their team. They host networking events to get you connected within the organization with the goal of finding fulltime employment within TDS by graduation. 
Additionally, there are monthly social events with other interns throughout the Madison area + An employer that places priority on employee development and training with a key focus on internal promotion + Company culture emphasizes ethics & promotes a healthy life/work balance with a business casual work environment + A challenging, fast-paced environment due to the continually evolving nature of the telecommunications industry and related technology + Opportunity to work with talented and dedicated team members committed to each individual’s success + TDS Telecom actively supports many local charities, local university activities, and community events and encourages all employees to participate in volunteer opportunities + Obtain frequent exposure to, and interaction with, management across TDS Telecom Gain meaningful IT experience while working on TDS' exclusive, mission-critical Billing System. This would include: + Code programs and scripts + Apply modifications to programs + Create test data, execute tests and verify results + Resolve problems, debug, test, and implement production fixes + Contribute ideas and experience during team meetings Required Qualifications + 1+ semester of SQL and relational databases + Knowledge of testing and the desire to learn Other Qualifications + Experience with MicroFocus technologies + Experience with COBOL + Experience with JCL + Experience with Perl + Experience with a Linux based environment Requisition ID: 2018-12932 External Company Name: Telephone and Data Systems Inc. External Company URL: www.tdsinc.com
          (USA-WI-Madison) IT Systems Intern      Cache   Translate Page      
TDS is looking for an intern to join the systems administration team. As a member of the systems team you will be assisting in the deployment of new hardware, assisting with day to day support of running systems, and assisting in the deployment of enhancements to currently running systems. We are looking for students with strong organizational and communication skills who possess a strong interest in IT systems administration.. As a member of the team you will work with other team members to assist in the management of a nationwide large scale network and applications using technologies such as: RedHat Linux, F5 Load balancers, Nagios, Elasticsearch, databases, python. and many more. TDS Telecom relies on our dynamic 290+ person IT team to innovate using leading edge technologies that empower us to continually move forward. Our diverse team focuses on the development and support of new products and services, while capitalizing on extensive training opportunities and vendor seminars. Team members have a passion for technology and thrive in a fast-paced, high-tech, opportunity-for-growth, and continuous learning environment. School is top priority so TDS offers very flexible schedules, while requiring you to work at least 15 hours per week. As an organization, TDS knows that our own employees are some of the most talented individuals around which is why approximately 40% of our positions over the past year have been filled by internal candidates. This internship will not only give you the opportunity to apply what you are learning in school in a real-world business setting, but it will give you exposure to many other opportunities within the organization that can provide a long-term career path. At TDS we don’t just invest in the success of our customers; we also invest in the success of our community. Local presence and connection to our community is vital to who we are at TDS. TDS has a laser focus on our customers that is supported by a great culture driven by the TDS mission and shared values. TDS strongly believes in partnering with local communities throughout the country and encourages all employees to participate in volunteer opportunities. In 2017 TDS had 543 employees volunteer and contribute in their neighborhoods. Why Join TDS? Our IT Interns partner with seasoned professionals and provide support and input on high-priority technology projects. An internship at TDS offers you: + Valuable hands-on experience working on multiple projects + Access to a wide variety of software and hardware programs + Competitive compensation + Access to an Intern Employee Resource Group (ERG) with the purpose to support a diverse and inclusive community of interns by providing resources for development, opportunities for internal advancement and forums to connect with other interns. This group meets once a month and allows you to hear from other interns about their projects as well as leaders within the organization about their team. They host networking events to get you connected within the organization with the goal of finding fulltime employment within TDS by graduation. 
Additionally, there are monthly social events with other interns throughout the Madison area + An employer that places priority on employee development and training with a key focus on internal promotion + Company culture emphasizes ethics & promotes a healthy life/work balance with a business casual work environment + A challenging, fast-paced environment due to the continually evolving nature of the telecommunications industry and related technology + Opportunity to work with talented and dedicated team members committed to each individual’s success + TDS Telecom actively supports many local charities, local university activities, and community events and encourages all employees to participate in volunteer opportunities + Obtain frequent exposure to, and interaction with, management across TDS Telecom This position will work with cutting-edge technologies to develop and support robust applications in our IT team. This includes: + Working with a variety of platforms, applications and scripting languages + Developing systems and scripts that will be used by TDS internal customers + Modifying and deploying changes to existing systems + Working with a variety of networking platforms and configurations + Resolving systems and application problems + Contributing ideas and designs to project work + Troubleshooting/developing REST, Soap/XML services Required Qualifications + 1+ semester of coursework towards a degree in Computer Science or Engineering + 1+ semester of coursework in Unix or LINUX environment Other Qualifications + Knowledge of scripting languages + Knowledge of tcp/ip networking Requisition ID: 2018-12928 External Company Name: Telephone and Data Systems Inc. External Company URL: www.tdsinc.com
          PYTHON Developer - Klein Management Systems - San Jose, CA      Cache   Translate Page      
Work on the design, implementation, technical support and evaluation of new and existing systems. Machine Learning with Python, Tensorflow, SyntaxNet and R...
From Klein Management Systems - Thu, 09 Aug 2018 17:29:34 GMT - View all San Jose, CA jobs
          Consultant | OpenSystem | Python - OpenSystem - Klein Management Systems - Pleasanton, CA      Cache   Translate Page      
Should be able to design and implement automation and alerting for any SNMP enabled components. Job Description – (Splunk / Shell Scripting / Python / Nagios...
From Klein Management Systems - Thu, 23 Aug 2018 17:28:16 GMT - View all Pleasanton, CA jobs
          Python-Java Developer      Cache   Translate Page      
NY-NEW YORK CITY, A global investment bank is seeking a Python/Java Developer to join their team in New York. This candidate will help the team in creating innovative solutions that advance businesses and careers. This team focuses on improving the design, analytics, development, coding, testing and application programming that goes into creating high quality software and new products. You will build-out and maintai
          Jr-Mid Level Software Developer (Wallingford, CT)      Cache   Translate Page      
CT-Wallingford, Our client is seeking a Jr-Mid Level Software Developer for a contract opportunity in Wallingford, CT! Requirements Bachelor’s Degree in Computer Science, or relevant work experience 1-5 years of professional software development experience 1-5 years developing applications for web services or applications Proficiency in (one or more): Java, C+, Objective-C, Ruby, Python, HTML 5, CSS and JavaScrip
          C++ developer - ExperTech Personnel Services Inc. - Montréal, QC      Cache   Translate Page      
Coding in C++, KSH, Python and T-SQL on Sybase IQ, Sybase ASE and SQL Server. ExperTech is a leading staffing and recruiting company, based in Montreal, QC and...
From ExperTech Personnel Services Inc. - Fri, 31 Aug 2018 18:55:41 GMT - View all Montréal, QC jobs
          Senior System Analyst (System Administrator) - TELUS Health - TELUS Communications - Montréal, QC      Cache   Translate Page      
Bash, Ksh, Python. 5 years programming experience with Bash, KSH, python. Join our team....
From TELUS Communications - Wed, 15 Aug 2018 18:07:59 GMT - View all Montréal, QC jobs
          Infrastructure Operations Support Specialist - NTT DATA Services - Montréal, QC      Cache   Translate Page      
5+ years experience programming with SQL, Regular Expressions, XML, BASH, KSH, Perl and Python. At NTT DATA Services, we know that with the right people on...
From NTT Data - Wed, 01 Aug 2018 20:04:56 GMT - View all Montréal, QC jobs
          Episode 84: #84: The Kill Chain      Cache   Translate Page      

This week Dave and Gunnar talk about: automated swarms, automated chefs, and automated kill chains.


 

Cutting Room Floor

We Give Thanks

          Episode 70: #70: A TAM We Like      Cache   Translate Page      

This week, Dave and Gunnar talk to Dave Sirrine, Technical Account Manager and delightful human being.


We Give Thanks

          Episode 28: #28: Abandonment Issues      Cache   Translate Page      

This week, Dave and Gunnar talk about Batman, Acxiom as your personal data custodian, the TSA Pre-✓ Class War, and the HACK REACTOR.

Subscribe via RSS or iTunes.

Abandoned Ferris Wheel

Cutting Room Floor

We Give Thanks

  • Matt Micene for helping us stay technically debt free
  • David A. Wheeler, Josh Davis, and Dan Risacher for advocating open source in the DoD
          Electrical Engineer - Apollo Technical LLC - Fort Worth, TX      Cache   Translate Page      
Computer languages, supporting several microcontroller languages including (machine code, Arduino, .NET, ATMEL, Python, PASCAL, C++, Ladder, Function Block)....
From Apollo Technical LLC - Thu, 02 Aug 2018 18:22:07 GMT - View all Fort Worth, TX jobs
          python-csvkit-git      Cache   Translate Page      
A suite of utilities for converting to and working with CSV.
          awscli (1.16.13)      Cache   Translate Page      
The AWS CLI is an open source tool built on top of the AWS SDK for Python (Boto) that provides commands for interacting with AWS services.

          GIS Developer      Cache   Translate Page      
UT-Salt Lake City, Title: GIS Developer Location: Salt Lake City, UT Duration: 12 Months Job Summary: Experienced GIS software developer experienced with ESRI, Python, JavaScript and Oracle environments. Required Skills: Intermediate JavaScript development Intermediate Python/ArcPy development Intermediate Oracle development Experience Intermediate Experience with REST API's Intermediate Experience with JSON Preferr
          move Odoo to a new server      Cache   Translate Page      
Hi, we are running a little Odoo enterprise instance in our small company and would like to swap servers. The current system should be moved to a new server. We are also thinking about updating from 10 to the latest version and would like to discuss changes in the workflow... (Budget: €18 - €36 EUR, Jobs: ERP, Linux, PHP, Python, Software Architecture)
          Raspberry Pi: a toy for a pet project or a microcomputer for a highload product      Cache   Translate Page      

Hi, my name is Ivan Nekipelov and I am a Python Tech Lead at Poster, a company that automates cafes, restaurants and shops. I want to tell you how and why we decided to use the Raspberry Pi on a permanent basis, for commercial purposes, in our highload product. Our experience will be useful to anyone who is considering a Raspberry for a narrow spot in their project and wants to understand what pitfalls may come up along the way.

What we do

Poster is a SaaS system for automating restaurant and retail businesses. What we make is called a Point of Sale, or simply a till. For you and your friends to grab a burger at a gastropub after work, the chef has to work out the menu and create recipe cards, the storekeeper has to know which products are running out and buy them in time, the waiter has to put the order through the till, and the cook has to prepare the dish. All of these processes run quickly and smoothly thanks to the automation system.

Our product is split into two parts: the terminal and the admin panel. The terminal runs on iPad and Android tablets or on Windows devices, as opposed to the expensive stationary automation systems that rely on bulky Windows all-in-one machines. Today 6,800 active venues in 65 countries use the Poster management system.

How to connect the unconnectable

Every restaurant uses a lot of peripheral retail hardware: a fiscal register, a receipt printer, scales and so on. All of this is vital for the venue's core internal processes, but it drags plenty of bureaucratic red tape along with it: the fiscal register has to be registered and certified by the tax authority, the scales checked by the bureau of metrology and state standardization. The monopolist manufacturers are often in no hurry to release new, more advanced models of the hardware, because the old ones keep selling well.

Connecting legacy hardware, which previously connected to all-in-one machines over USB at best and over a COM port at worst, to new iPads became the main challenge at the start of our work. We saw several ways to solve this problem:

1. Giving up the iPad :)

The simplest solution was to drop the iPad and use only Android or Windows devices with wired connections, but that contradicted our main value of a convenient, reliable and modern solution, so we discarded this option straight away.

2. Using yet another tablet or a Windows computer

We also considered using an additional tablet or a Windows computer, which would make it possible to connect the hardware to a Windows terminal and talk to it from the other terminals running on tablets. But this would have meant extra costs for our customers because of the complexity and price of the setup.

In the end we settled on the Raspberry Pi, and I will explain why.

What the "raspberry" is

The Raspberry Pi is a microcomputer slightly larger than a credit card. It has USB and Ethernet ports. It runs on the Raspbian operating system, a fork of Debian for the Raspberry Pi.

Raspberry was originally created for educational purposes and DIY projects. Right after this single-board computer was released, thousands of geeks started building smart homes, automatic airsoft rifles to guard the house and marijuana growers on top of it. Schools in third-world countries also started buying the "raspberry" to teach children computer science. By 2017 more than 12.5 million units of this microcomputer had been sold. Many companies abroad began using the Raspberry in their commercial projects; for example, the Slice media player and the OTTO GIF camera are built on it. But these projects were mostly startups and were sold through crowdfunding campaigns on Kickstarter.

The OTTO camera, based on a Raspberry Pi, for shooting GIF images

We, however, decided to use the Raspberry microcomputer for genuinely commercial purposes on a permanent basis. The main reasons were the low cost, reliability and simplicity of the solution. Another huge plus of this particular microcomputer and operating system is the large community, where answers can be found to the trickiest questions.

How we built Poster Box

We called the Raspberry-based hardware and software bundle Poster Box. It connects to the venue's local network, and fiscal or thermal printers connect to it.

Choosing a server

We decided to build the web service on Tornado, so that a single Raspberry could talk to several fiscal registers and printers in parallel.
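
To make that concrete, here is a minimal sketch of what such a Tornado service could look like; the handler, the port and the printer registry below are illustrative assumptions of mine, not Poster's actual code.

import tornado.ioloop
import tornado.web

# Hypothetical registry of devices attached to this box: name -> device path
PRINTERS = {"kitchen": "/dev/usb/lp0", "bar": "/dev/usb/lp1"}

class PrintHandler(tornado.web.RequestHandler):
    async def post(self, printer_name):
        # Each request carries the receipt text to print on one of the devices
        if printer_name not in PRINTERS:
            raise tornado.web.HTTPError(404)
        receipt = self.request.body.decode("utf-8")
        # A real implementation would talk to the printer driver here
        print("printing on", PRINTERS[printer_name], ":", receipt)
        self.write({"status": "ok"})

def make_app():
    return tornado.web.Application([
        (r"/print/(\w+)", PrintHandler),
    ])

if __name__ == "__main__":
    make_app().listen(8888)
    tornado.ioloop.IOLoop.current().start()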

Drivers

Once the server was chosen, the question of supporting restaurant hardware came up. Since the fleet of devices is quite large, we had to find the most universal solution possible. That silver bullet for us turned out to be a driver from the ArtSoft company called the "Universal RRO driver". It offers a single interface for talking to every fiscal register available in Ukraine. Using ready-made components helped us launch the product as quickly as possible. Later we wrote drivers for some other models ourselves in order to lower the cost of the device.

When we scaled Poster Box to other countries, we sometimes had to write drivers for the fiscal registers ourselves. For example, at the moment we launched in Russia, Atol, the largest manufacturer of fiscal registers there, had no libraries or drivers at all suited to running on a Raspberry, so we had to close that gap on our own.

Integrations

Writing integrations for this kind of hardware is a rather entertaining exercise. Communication with fiscal registers happens over binary protocols. Essentially, you send one byte array, receive another in response, parse it and find out whether the operation succeeded or failed.
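
Purely as an illustration of the idea, a request frame for such a protocol might be built and checked roughly like this; the start byte, the command code and the checksum rule are invented for the example and do not describe any real register.

import struct

STX = 0x02  # hypothetical start-of-frame byte

def build_frame(command: int, payload: bytes) -> bytes:
    # Hypothetical layout: STX, command, payload length, payload, XOR checksum
    body = struct.pack("<BBH", STX, command, len(payload)) + payload
    checksum = 0
    for b in body:
        checksum ^= b
    return body + bytes([checksum])

def parse_reply(frame: bytes) -> bytes:
    # Verify the checksum and return the payload, raising on a corrupted reply
    *body, checksum = frame
    calc = 0
    for b in body:
        calc ^= b
    if calc != checksum:
        raise ValueError("bad checksum")
    stx, command, length = struct.unpack("<BBH", bytes(body[:4]))
    return bytes(body[4:4 + length])

# Example: command 0x10 with a two-byte payload
frame = build_frame(0x10, b"\x01\x00")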

Top 3 problems we had to solve

The problems with Poster Box began after the tenth device was sold. I would single out three key problems that we ran into and had to solve quickly:

1. Putting production on a conveyor

The very first problem was organizing streamlined production of the devices. Essentially, we had to assemble the Raspberry into a case, install the operating system and our software on it, and check that it worked. The assembly and flashing procedure took about 40 minutes and required a developer's attention. Given how inefficient that was, we decided to automate the process. For that we wrote a tool called pbox-farm, essentially a CLI utility that also ran on a Raspberry.

Once launched, it found all USB mass storage devices and wrote the Raspbian image onto them, then chrooted into each written flash card and installed the software. This sped things up considerably: down to 40 minutes for 4 devices, with minimal developer involvement.

2. Distribution and updates

The next problem was distributing new services to the devices and keeping them up to date. None of the ready-made code delivery tools covered our needs, so we had to build our own. For this we used the opkg package manager, which was originally developed for OpenWRT and is very lightweight.

We then built a multi-level repository system for gradual rollouts and wrote a service that simply invokes opkg once per given period and updates the packages on the Poster Box. Today the config on each device looks roughly like this:

src/gz custom http://updateserver.com/ua/ua999
src/gz stable_ua http://updateserver.com/stable/ua
src/gz testing_ua http://updateserver.com/testing/ua
arch any 100
arch arm 200

There are three update channels: stable, testing and custom. Stable holds stable versions for everyone. Testing is the channel where new releases go out to roughly 10% of all customers. Custom holds device-specific packages tied to a particular unit.
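
The update service itself can stay tiny; below is a sketch of the idea, where the six-hour interval and the exact opkg calls are my own assumptions rather than Poster's real service.

import subprocess
import time

UPDATE_INTERVAL = 6 * 60 * 60  # hypothetical: check every six hours

def update_packages():
    # Refresh the package lists from all configured channels, then upgrade
    subprocess.run(["opkg", "update"], check=True)
    subprocess.run(["opkg", "upgrade"], check=True)

if __name__ == "__main__":
    while True:
        try:
            update_packages()
        except subprocess.CalledProcessError as err:
            print("update failed:", err)
        time.sleep(UPDATE_INTERVAL)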

3. Setting up remote access

Since every Poster Box physically sits at the customer's venue, we had no direct access to it. When customers started contacting us about various problems with our software on the "raspberry", there was not much we could actually do. It became clear that without remote access to each device the work would be difficult and inefficient.

One option was to set up a VPN for every Raspberry. But that approach seemed needlessly complex, and on top of that the VPN did not always come up reliably on every device.

So we decided to build a chat bot on top of the XMPP protocol. Every Poster Box runs a bot service that connects to the XMPP server when the device powers on and goes online for all of its contacts. When a command message arrives from a contact on the whitelist, the bot runs the corresponding script on the device. This scheme turned out to be very convenient.
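
A stripped-down sketch of such a bot, written here with the slixmpp library purely for illustration (the whitelist, the command table and the JIDs are invented), could look like this:

import subprocess
import slixmpp

WHITELIST = {"support@poster.example.com"}  # hypothetical admin account
COMMANDS = {"restart-printer": ["systemctl", "restart", "printer.service"]}  # hypothetical

class BoxBot(slixmpp.ClientXMPP):
    def __init__(self, jid, password):
        super().__init__(jid, password)
        self.add_event_handler("session_start", self.on_start)
        self.add_event_handler("message", self.on_message)

    async def on_start(self, event):
        # Go online so whitelisted contacts can see and message the box
        self.send_presence()
        await self.get_roster()

    def on_message(self, msg):
        sender = msg["from"].bare
        command = msg["body"].strip()
        if sender not in WHITELIST or command not in COMMANDS:
            return
        result = subprocess.run(COMMANDS[command], capture_output=True, text=True)
        msg.reply(result.stdout or "done").send()

if __name__ == "__main__":
    bot = BoxBot("box-123@poster.example.com", "secret")
    bot.connect()
    bot.process(forever=True)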

We used Prosody as the server. Prosody turned out to be a very reliable solution with practically no downsides; it just needs a very high system limit on open file descriptors.

And one more

There is one more problem worth mentioning separately: how demanding the Raspberry is about its power supply. The voltage feeding the "raspberry" has to be 5 volts at a minimum of 2 amps, but no more than 3. On top of that, the Raspberry is very sensitive to power surges, since it has no capacitors for a safe shutdown with file system syncing. Because of this, surges corrupted the boot partitions of the flash cards at some venues. The only way to get rid of this problem without changing the platform would be to solder a capacitor shield onto every Raspberry, which would noticeably raise the cost and create a distribution problem.

For now we recommend that customers put the Raspberry on an uninterruptible power supply, or at least a good surge protector, to rule out such failures.

The bottom line

Poster Box is a useful solution that we managed to launch quickly thanks to an unconventional but successful choice of the Raspberry Pi. Venue owners can now manage their business from tablets anywhere in the world in real time, without spending time and money on bulky stationary systems.

The Raspberry Pi certainly has its pros and cons. We managed to overcome almost all of the problems except the shortcomings of the platform itself, and some other solution that would satisfy our current needs does not look worthwhile.

Today, as more and more manufacturers of retail peripherals add network interfaces to their products, we are gradually phasing out this extra link. But I think Poster Box will live on for quite a while, since the second-hand hardware market is unfortunately oversaturated with old models.


          Python podcasts: here is everything we found      Cache   Translate Page      

The query "Alisa, what should I listen to about Python", typed into Google, will most likely leave you stumped, or send you to articles that are many years old and no longer very relevant, or to long-closed threads that simply cannot be updated (or there is nobody left to update them).

That is how the idea was born to put together a list of topical audio and video casts and to try to keep it up to date. For at least a year. If you are reading this in 2020, message me or write about your podcast in the comments and we will add it.

13 podcasts and a little bit of Spain
          DevOps Engineer Python Agile Docker      Cache   Translate Page      
DevOps Engineer (Python OO Linux Cloud Kubernetes Docker Jenkins). Utilise your DevOps Engineer skills within a successful data science consultancy that is working with some of the best software vendors in the industry on a range of interesting projects such as Data Lake solutions, Blockchain projects and IoT development. The company cultivate a continuous learning environment enabling you to stay ahead of the game with the latest industry trends and upon starting will enrol you on a course that covers Big Data, DevOps and Data Science allowing you to perform to the best of your ability. As a DevOps Engineer you will be acting as a consultant, travelling to a variety of London based clients and participating in leading edge projects. You will be required to provide hands-on technical expertise for clients utilising the best of Open Source software on premise and in the Cloud. This is the first DevOps hire within the London office meaning you will be able to make the role your own and have the opportunity to take on a leadership position, building a successful team around you. Based in London, you will be joining a friendly and supportive company that will encourage you to continually develop new skills allowing you to reach your full potential. Requirements: *Experience with DevOps culture and Agile project delivery *Software development background using any OO programming language (Java, C++, C#) *Strong Python skills *Experience with containerisation and deployment tools (Docker, Kubernetes, Jenkins) *Good Linux knowledge *Cloud experience *Able to travel to client sites across London *Excellent communication skills As a DevOps Engineer (Python) you can expect to earn a competitive salary (up to £85k) plus benefits. Apply today or call to have a confidential discussion about this DevOps Engineer (Python) role.
          Build a Python library to manage RaspberryPi WiFi      Cache   Translate Page      
I need a Python (3+) library to manage the WiFi in RaspberryPi The starting idea is to have functions to configure the device as AdHoc, join an existing network, list networks, etc. The library must be robust since will run in remote locations... (Budget: $30 - $250 USD, Jobs: C Programming, Linux, Python, Raspberry Pi)
          Re : Response with empty body - is this a problem?      Cache   Translate Page      
The samples are meant to work with example server. The example server sends back a JSON response. The server code is included in the GitHub repo - as PHP, Python, and Go. If you want to do something similar, you can look at those server examples for ideas.

https://github.com/blueimp/jQuery-File-Upload/tree/master/server

The example servers send back some details about the files as stored on the server - type, size, etc. It can also send back some error indication. If you don't need this, don't worry about it. 

- If your server sends back no data, just don't set dataType. Setting dataType as JSON is WRONG unless your server is sending back JSON. There is nothing "silly" about the example code. It's meant to go with the accompanying server.

- Stop setting a header with incorrect contentType for upload. It's probably benign, but why tempt fate? application/octet-stream is WRONG for file upload. This "work around" is not needed, and luckily your server is ignoring the incorrect contentType.

I don't know that the demos are even meant to be examples. "demo" != "example"! It seems they are meant to allow you to try the various features of the plugin, and they've kindly provided a server on the Internet to allow a quick/easy demonstration.

Refer to the documentation:


rather than trying to cut/paste from the demo.

There are an AWFUL LOT of additional server implementations available in the documentation! You will surely find something there that is a fit for your server environment.



          How to easily use pipenv in Python to isolate package environments between projects      Cache   Translate Page      
In a Python development environment you often run into compatibility problems between runtime environments, for example projects depending on different package versions, or on different Python versions, so that things no longer run properly. In that case we need to isolate each project's runtime environment, so that every project has an independent Python environment and package library. This is where the pipenv tool comes in. First install it: pip3 install pipenv, then use the cd command...
          Research and example implementation of the K-means clustering algorithm      Cache   Translate Page      
Research and example implementation of the K-means clustering algorithm (Bai Ningchao, 5 September 2018, 15:01:20). Introduction: the k-means algorithm (in English [...]
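
The excerpt is cut off here; as a rough sketch of what such an example implementation typically covers, the core k-means loop can be written with NumPy as below (the random data, the value of k and the iteration count are arbitrary choices of mine, not the article's).

import numpy as np

def kmeans(points, k, iterations=100):
    # Pick k distinct points as the initial centroids
    centroids = points[np.random.choice(len(points), k, replace=False)]
    for _ in range(iterations):
        # Assign every point to its nearest centroid
        distances = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # Move each centroid to the mean of the points assigned to it,
        # keeping the old centroid if a cluster ends up empty
        centroids = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
    return labels, centroids

points = np.random.rand(200, 2)
labels, centroids = kmeans(points, k=3)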
          Programmer/Analyst (Research Data Management) - University of Saskatchewan - Saskatoon, SK      Cache   Translate Page      
Java, JavaScript, Python, PHP, HTML, YAML, CSS, Git, Angular, Ansible, Grunt, Jenkins, JIRA, Confluence, Docker, Django.... $62,850 - $98,205 a year
From University of Saskatchewan - Mon, 30 Jul 2018 18:22:24 GMT - View all Saskatoon, SK jobs
          Senior Embedded Software Developer - SED Systems - Saskatoon, SK      Cache   Translate Page      
Familiarity with Matlab, Python, JavaScript, Java, HTML5; The ability to obtain a Secret security clearance and meet the eligibility requirements outlined in...
From SED Systems - Sat, 30 Jun 2018 07:14:09 GMT - View all Saskatoon, SK jobs
          Test Leader - hexatier - Leader, SK      Cache   Translate Page      
Development experience with Python and Java. Experience working with cross-functional teams including engineering, support and senior management is required....
From hexatier - Fri, 20 Jul 2018 09:43:27 GMT - View all Leader, SK jobs
          Install Jupyter Notebook and TensorFlow On Ubuntu 18.04 Server      Cache   Translate Page      

Here is how to install Jupyter Notebook and TensorFlow on an Ubuntu 18.04 server. It is probably easiest to install Anaconda for the Python packages.

The post Install Jupyter Notebook and TensorFlow On Ubuntu 18.04 Server appeared first on The Customize Windows.


          To stay politically correct, the Python project is getting rid of the terms "master" and "slave"      Cache   Translate Page      
none
          Backend Developer (Python,Ruby,Go) Digital Company - London      Cache   Translate Page      
Tanta Recruitment - The City, London - Our Client is a Digital ID company, working on an App with 2 million downloads so far. As a Polyglot Developer, you will participate...
          Back-End Software Engineer - Stored E-commerce - Ribeirão Preto, SP      Cache   Translate Page      
A young, laid-back environment, comfortable office, video games, bean bags for resting; Stored E-commerce is looking for a back-end Python developer...
From Stored E-commerce - Sun, 08 Jul 2018 05:31:12 GMT - View all jobs in Ribeirão Preto, SP
          Software Development Engineer, Big Data - Zillow Group - Seattle, WA      Cache   Translate Page      
Experience with Hive, Spark, Presto, Airflow and or Python a plus. About the team....
From Zillow Group - Fri, 07 Sep 2018 01:05:52 GMT - View all Seattle, WA jobs
          Data Scientist, Zillow Offers - Zillow Group - Seattle, WA      Cache   Translate Page      
Dive into Zillow's internal and third party data (think Hive, Presto, SQL Server, Python, R, Tableau) to make strategic recommendations (e.g., improve...
From Zillow Group - Thu, 06 Sep 2018 01:06:28 GMT - View all Seattle, WA jobs
          Python beats C++ and is number three on popularity index      Cache   Translate Page      
Python continues its winning streak in the popularity rankings, while the language is looking for a new boss.
          Sharing a beginner-friendly book: "Python and Quantitative Investing: From Basics to Practice" (Wang Xiaochuan)      Cache   Translate Page      
I recently got started with Python quantitative trading and found that many of the Chinese-language books are flashy but impractical; only this one explains things reasonably well. I searched online for a long time before finding the electronic edition, and I am sharing it with everyone. Finding resources is not easy, so please show some support as you pass by, thank you! Content summary follows: About "Python and Quantitative Investing": the book is divided into two major parts with seven chapters in total; the first three chapters cover Python basics ...
          Python Current Date Time      Cache   Translate Page      

We can use the Python datetime module to get the current date and time of the local system.

from datetime import datetime

# Current date time in local system
print(datetime.now())

Output: 2018-09-12 14:17:56.456080

Table of Contents

1. Python Current Date
2. Python Current Time
3. Python Current Date Time in timezone pytz
4. Python Pendulum Module

Python Current Date

If you are interested only in the date of the local system, you can use the datetime date() method.


print(datetime.date(datetime.now()))

Output: 2018-09-12

Python Current Time

If you want only the time on the local system, use the time() method, passing a datetime object as an argument.


print(datetime.time(datetime.now()))

Output: 14:19:46.423440

Python Current Date Time in timezone pytz

Most of the time, we want the date in a specific timezone so that it can be used by others too. The Python datetime now() function accepts a timezone argument that should be an implementation of the tzinfo abstract base class.

Python pytz is one of the popular modules that can be used to get timezone implementations.

You can install this module using the following PIP command.


pip install pytz

Let’s look at some examples of using pytz module to get time in specific timezones.

import pytz

utc = pytz.utc
pst = pytz.timezone('America/Los_Angeles')
ist = pytz.timezone('Asia/Calcutta')

print('Current Date Time in UTC =', datetime.now(tz=utc))
print('Current Date Time in PST =', datetime.now(pst))
print('Current Date Time in IST =', datetime.now(ist))

Output:

Current Date Time in UTC = 2018-09-12 08:57:18.110068+00:00
Current Date Time in PST = 2018-09-12 01:57:18.110106-07:00
Current Date Time in IST = 2018-09-12 14:27:18.110139+05:30

If you want to know all the supported timezone strings, you can print this information using the following command.


print(pytz.all_timezones)

It will print the list of all the supported timezones by the pytz module.

Python Pendulum Module

The Python Pendulum module is another timezone library and, according to its documentation, it is faster than the pytz module.

We can install the Pendulum module using the PIP command below.


pip install pendulum

You can get the list of supported timezone strings from the pendulum.timezones attribute.

Let's look at some examples of getting current date and time information in different time zones using the pendulum module.

import pendulum

utc = pendulum.timezone('UTC')
pst = pendulum.timezone('America/Los_Angeles')
ist = pendulum.timezone('Asia/Calcutta')

print('Current Date Time in UTC =', datetime.now(utc))
print('Current Date Time in PST =', datetime.now(pst))
print('Current Date Time in IST =', datetime.now(ist))

Output:

Current Date Time in UTC = 2018-09-12 09:07:20.267774+00:00
Current Date Time in PST = 2018-09-12 02:07:20.267806-07:00
Current Date Time in IST = 2018-09-12 14:37:20.267858+05:30

You can check out the complete Python script and more Python examples from our GitHub Repository.


          From novice to pro: 7 super practical Python automated testing frameworks      Cache   Translate Page      

With the advance of technology and the arrival of automation, a number of automated testing frameworks have appeared on the market. After just a few adjustments of suitability and efficiency parameters, these frameworks can be used out of the box, saving a great deal of development time. And because they are so widely used, they are very robust, and their broad and varied sets of use cases and techniques make it easy to uncover even tiny defects. Today we will take a look at the common Python automated testing frameworks.

Common testing frameworks

1. Unittest

unittest is Python's built-in standard library. Its API is very similar to Java's JUnit, .NET's NUnit and C++'s CppUnit.

You create a test case by subclassing unittest.TestCase.

For example:

import unittest

def fun(x):
    return x + 1

class MyTest(unittest.TestCase):
    def test(self):
        self.assertEqual(fun(3), 4)

Run it and it passes.

However, if we change the expected result to 5, the run fails, as shown below:

[screenshot of the failing unittest run]

2. Doctest

The doctest module searches for pieces of Python code that look like interactive sessions, then tries to execute them and verify the results. Even if you have never touched doctest, the name alone gives away half of it: "it looks just like the docstrings in the code". If that is what you thought, you are already half right.

For example:

def square(x):
    """Squares x.

    >>> square(2)
    4
    >>> square(-2)
    4
    >>> square(5)
    25
    """
    return x * x

if __name__ == '__main__':
    import doctest
    doctest.testmod()

When this code is run, the test code after the >>> prompts inside the docstring is executed and compared with the result on the following line. The outcome of the run is as follows:

[screenshot of the passing doctest run]

But if we change the result, so that square(2) is expected to return 5, the test code looks like this:

def square(x):
    """Squares x.

    >>> square(2)
    5
    >>> square(-2)
    4
    >>> square(5)
    25
    """
    return x * x

if __name__ == '__main__':
    import doctest
    doctest.testmod()

The result of running the test is as follows:

[screenshot of the failing doctest run]

3. py.test

pytest is a unit testing framework for Python, similar to the built-in unittest framework, but simpler and more efficient to use. According to the official pytest website, it has the following characteristics:

① Very easy to pick up, simple to get started with, and well documented, with many examples in the documentation to refer to

② Supports both simple unit tests and complex functional tests

③ Supports parameterization

④ Lets you skip certain tests during a run, or mark cases that are expected to fail

⑤ Supports re-running failed cases

⑥ Can run test cases written with nose and unittest

⑦ Has many third-party plugins and can be extended with custom ones

⑧ Integrates easily with continuous integration tools

Writing pytest test cases

Writing pytest test cases is very simple; you only need to follow the rules below (similar to nose):

Test files start with test_ (ending with _test also works)

Test classes start with Test and must not have an __init__ method

Test functions start with test_

Assertions use the plain assert statement

example.py
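
The example.py screenshot is not reproduced here; following the rules above, a minimal sketch of such a file (the class and test names are mine, not the article's) might look like this:

import pytest

def test_addition():
    assert 1 + 1 == 2

class TestDivision:
    @classmethod
    def setup_class(cls):
        print("setup_class: runs once before the tests in this class")

    def setup_method(self, method):
        print("setup_method: runs before every test method")

    def test_int_division(self):
        assert 7 // 2 == 3

    def test_division_by_zero(self):
        with pytest.raises(ZeroDivisionError):
            1 / 0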

setup_class/teardown_class run at the start and end of the current test class.

setup/teardown run at the start and end of every test method.

setup_method/teardown_method run at the start and end of every test method, at the same level as setup/teardown.

Running pytest test cases

There are many ways to run the test cases: the first example above runs py.test directly, and the second passes a test file to py.test. In fact, py.test offers quite a few ways of running tests:

[screenshot of the different py.test invocations]

4. Nose

Nose is an extension of unittest that makes testing in Python easier. nose automatically discovers test code and runs it, and it provides a large number of plugins, for example xUnit-compatible test output, coverage reports and so on.

Detailed nose documentation: https://nose.readthedocs.org/en/latest/

nose is not a module that ships with Python; it has to be installed with pip

[screenshot of installing nose with pip]

nose commands:

1. nosetests -h: show all nose-related commands

2. nosetests -s: run and capture output

3. nosetests --with-xunit: output an XML result report

4. nosetests -v: show nose's run and debug information

5. nosetests -w <directory>: run tests from the given directory

nose characteristics:

a) Automatically discovers test cases (including [Tt]est files and functions containing test inside packages)

b) Files starting with test

c) Functions or methods starting with test

d) Classes starting with Test

Through experimentation we found that nose automatically recognizes classes, functions, files or directories that match [Tt]est, as well as subclasses of TestCase; matching packages and any Python source files are treated as test cases.

5. tox

Its biggest feature is automated management of test environments and running tests against multiple interpreter configurations.

Detailed tox documentation: http://testrun.org/tox/latest/

6. Unittest2

An upgraded version of unittest, with improvements to the API and better diagnostic syntax.

Detailed unittest2 documentation: https://pypi.python.org/pypi/unittest2

First, install it:

pip install unittest2

To be able to switch between unittest and unittest2 later, the best way to write the code is as follows:

import unittest2 as unittest

class MyTest(unittest.TestCase):
    ...

7. mock

unittest.mock is a library for testing in Python. Since Python 3.3 it has been part of the standard library. For older versions, install it with pip install mock.

The essence of mock is that you can replace parts of your system with mock objects and then verify that the subsequent behaviour is correct.
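
A tiny illustration of that idea (the charge function and the client object are invented for the example; only mock.Mock and its assertion helpers come from the library):

from unittest import mock

def charge(client, amount):
    # The real system would talk to a payment gateway through `client`
    response = client.post("/charge", json={"amount": amount})
    return response.status_code == 200

def test_charge_success():
    fake_client = mock.Mock()
    fake_client.post.return_value = mock.Mock(status_code=200)

    assert charge(fake_client, 100) is True
    # Verify that the code under test used the mock as expected
    fake_client.post.assert_called_once_with("/charge", json={"amount": 100})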

Detailed mock documentation: http://www.voidspace.org.uk/python/mock/

Summary:

This article mainly covers the design ideas and basic usage examples of Python-based test automation frameworks. The tools themselves are easy to use; making good use of them in real software production requires other engineering skills as well.

"Automated software testing has a cost, and not a small one: roughly the cost of building a parallel test development project on top of the original feature development project."

In other words, if you have expectations for test automation, you will have to invest the corresponding effort and cost. Good things take capable people and a lot of time. Building this mindset before formally entering test automation will take you further on the software testing road.



          Index of corresponding rows in Pandas DataFrame &lbrack;Python&rsqb; ...      Cache   Translate Page      

I have two Pandas DataFrames ( A and B ) with 2 columns and different number of rows.

They used to be numpy 2D matrices and they both contain integer values.

Is there any way to retrieve the indices of matching rows between those two?

I've been trying isin() or query() or merge() , without success.

This is actually a follow-up to a previous question: I'm trying with pandas dataframes since the original matrices are rather huge.

The desired output, if possible, should be an array (or list) containing in i-th position the row index in B for the i-th row of A . E.g an output list of [1,5,4] means that the first row of A has been found in first row of B , the second row of A has been found in fifth row in B and the third row of A has been found in forth row in B .

I would do it this way:

In [199]: df1.reset_index().merge(df2.reset_index(), on=['a','b'])
Out[199]:
   index_x  a  b  index_y
0        1  9  1       17
1        3  4  0        4

or like this:

In [211]: pd.merge(df1.reset_index(), df2.reset_index(), on=['a','b'], suffixes=['_1','_2'])
Out[211]:
   index_1  a  b  index_2
0        1  9  1       17
1        3  4  0        4

data:

In [201]: df1
Out[201]:
   a  b
0  1  9
1  9  1
2  8  1
3  4  0
4  2  0
5  2  2
6  2  9
7  1  1
8  4  3
9  0  4

In [202]: df2
Out[202]:
    a  b
0   3  5
1   5  0
2   7  8
3   6  8
4   4  0
5   1  5
6   9  0
7   9  4
8   0  9
9   0  1
10  6  9
11  6  7
12  3  3
13  5  1
14  4  2
15  5  0
16  9  5
17  9  1
18  1  6
19  9  5
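To turn that merge result into the list asked for in the question (the row position in B for each row of A), one possible follow-up, not part of the original answer and assuming every row of A occurs exactly once in B, is:

import pandas as pd

merged = df1.reset_index().merge(df2.reset_index(), on=['a', 'b'], suffixes=['_A', '_B'])
# order by the row position in A, then read off the matching row position in B
match = merged.sort_values('index_A')['index_B'].tolist()
print(match)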
          [Python] Recognizing MNIST with a Keras neural network      Cache   Translate Page      

Last time I wrote a neural network in Matlab to recognize MNIST; it is here: https://www.cnblogs.com/tiandsp/p/9042908.html

This time I built a similar one with Keras. After all, the most popular projects nowadays are written in Python, so I am following the trend :)

The data is loaded from locally parsed images and label files.

The network has two hidden layers, each with 512 nodes.


import numpy as np
from keras.preprocessing import image
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation

# Load data from the image folders and the label files
def create_x(filenum, file_dir):
    train_x = []
    for i in range(filenum):
        img = image.load_img(file_dir + str(i) + ".bmp", target_size=(28, 28))
        img = img.convert('L')
        x = image.img_to_array(img)
        train_x.append(x)
    train_x = np.array(train_x)
    train_x = train_x.astype('float32')
    train_x /= 255
    return train_x

def create_y(classes, filename):
    train_y = []
    file = open(filename, "r")
    for line in file.readlines():
        tmp = []
        for j in range(classes):
            if j == int(line):
                tmp.append(1)
            else:
                tmp.append(0)
        train_y.append(tmp)
    file.close()
    train_y = np.array(train_y).astype('float32')
    return train_y

classes = 10
X_train = create_x(55000, './train/')
X_test = create_x(10000, './test/')
X_train = X_train.reshape(X_train.shape[0], 784)
X_test = X_test.reshape(X_test.shape[0], 784)
Y_train = create_y(classes, 'train.txt')
Y_test = create_y(classes, 'test.txt')

# Alternatively, parse the dataset downloaded from the web directly
'''
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
X_train, Y_train = mnist.train.images, mnist.train.labels
X_test, Y_test = mnist.test.images, mnist.test.labels
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train = X_train.reshape(55000, 784)
X_test = X_test.reshape(10000, 784)
'''

model = Sequential()
model.add(Dense(512, input_shape=(784,)))
model.add(Activation('relu'))
model.add(Dropout(0.4))
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.4))
model.add(Dense(10))
model.add(Activation('softmax'))
model.summary()
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])

history = model.fit(X_train, Y_train, batch_size=500, epochs=20, verbose=1, validation_data=(X_test, Y_test))
score = model.evaluate(X_test, Y_test, verbose=0)
test_result = model.predict(X_test)
result = np.argmax(test_result, axis=1)
print(result)
print('Test score:', score[0])
print('Test accuracy:', score[1])

The final recognition rate on the test set is around 98%.

The related test data can be downloaded via the link in the original post.


          Hands-on project: how to generate a photomosaic with Python      Cache   Translate Page      

Do you know what a mosaic picture is? Not the pixelated mosaic from certain films~~

A photomosaic is a big picture assembled from small pictures. The cover of this article is our result: zoom in on the details and every tile is a separate, independent image; together they form one large picture that looks as if it were laid out of mosaic tiles, hence the name. I saw some photomosaics online, thought they looked cool, and so implemented the conversion of an original photo into a photomosaic in Python myself.

Our result and the original photo it was built from are shown as images in the original post.

The concrete idea of the implementation is as follows:

Step 1: First collect a set of images; they will become the small tiles inside the big picture. The more images there are, the closer the colors of the final result will be.

Step 2: Split the picture to be converted into small square tiles (the original post shows the resulting grid as an image).

Step 3: For every small tile, pick the closest image from the collection and substitute it. Once every tile has been replaced, the final photomosaic is done.

Sounds simple, doesn't it?

Let's look at the concrete implementation steps; below are some of the core snippets. The complete code can be obtained by replying "mosaic" in the back end of the official WeChat account.

Our image collection lives in the images directory. The loading code (a screenshot in the original post) loads every image in the directory and scales it to a uniform size.

Here the parameters of the load_all_images function are the unified size: tile_row and tile_col correspond to the height and width respectively.
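The screenshot itself is not reproduced here; a minimal sketch of what load_all_images might look like, using the function name and parameters mentioned in the text (the implementation details are my own assumption), is:

import os
import cv2

def load_all_images(tile_row, tile_col, img_dir='images'):
    """Load every image in img_dir and resize it to (tile_col, tile_row)."""
    tiles = []
    for name in os.listdir(img_dir):
        img = cv2.imread(os.path.join(img_dir, name))
        if img is None:          # skip non-image files
            continue
        tiles.append(cv2.resize(img, (tile_col, tile_row)))
    return tiles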

The next snippet (also a screenshot in the original post) splits the picture that is being converted.

We cut the target picture into small tiles; tile_row and tile_col are the tile height and width, and roi holds the image data of each tile.

Below is the function that computes the similarity between two pictures (a screenshot in the original post).

im1 and im2 are the image data of the two pictures, each a three-dimensional numpy array. We flatten the three-dimensional arrays into one-dimensional arrays and compare their Euclidean distance. To find the most similar picture we then just iterate over every image in the collection, take the one with the smallest distance, and use it to replace the tile in the original picture.
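Again as an illustrative sketch rather than the original code (all names here are assumptions), the similarity measure and the tile-replacement loop described above could look roughly like this:

import numpy as np

def img_distance(im1, im2):
    """Euclidean distance between two images after flattening them to 1-D."""
    return np.linalg.norm(im1.astype('float').ravel() - im2.astype('float').ravel())

def build_mosaic(img, tiles, tile_row, tile_col):
    out = img.copy()
    for r in range(0, img.shape[0] - tile_row + 1, tile_row):
        for c in range(0, img.shape[1] - tile_col + 1, tile_col):
            roi = img[r:r + tile_row, c:c + tile_col]
            # pick the tile with the smallest distance to this region
            best = min(tiles, key=lambda t: img_distance(roi, t))
            out[r:r + tile_row, c:c + tile_col] = best
    return out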

The resulting photomosaic and a zoomed-in view of a local detail are shown as images in the original post.

If you are not satisfied with the quality and want a finer picture, consider splitting the image into even smaller tiles, although that also increases the running time.

Generating the picture is fairly time-consuming, so for performance reasons the original program uses multiprocessing to process the tiles in parallel.



          Redis and Python asked to change the master/slave names and descriptions in their interfaces      Cache   Translate Page      

Master/slave is a term widely used in computing; in Redis it describes the primary/replica processes. Some people consider the master-slave terminology offensive, and there have been many calls to change it.

Redis

Redis author antirez said he is sorry that the master-slave wording upsets many people, but he does not consider this context-specific term offensive; his use of master-slave in a database certainly does not imply "slavery" in any way.

antirez also pointed out that this seemingly simple change would actually come at a high cost and create compatibility problems. For example:

Existing PRs could no longer be applied

Commands such as INFO and ROLE reply using a protocol that contains the term slave

The term slave appears about 1,500 times in the source code

People with private forks who merge code as needed would run into many problems

As you can see, a rash change would cause a lot of trouble. Moreover, the current Redis 5 release candidates are the first stable line intended to be backwards compatible, which also has to be taken into account. In the end antirez proposed a compromise:

Short-term changes:

Change the description of the master-slave architecture to master-replica

Provide REPLICAOF as an alias for SLAVEOF, so SLAVEOF still works but there is an extra option

Keep using slave in the replies of INFO and ROLE for now, since changing those still looks like a major breaking change

Python

Just last week, Victor Stinner, a Python developer working at Red Hat, publicly submitted 4 PRs hoping to change occurrences of "master" and "slave" in the Python documentation and code to terms like "parent" and "worker", and to adjust other similar terms as well. In his bug report Stinner explained that, for the sake of diversity, it might be better to avoid terms associated with slavery, such as 'master' and 'slave'. He also noted that there had been complaints about this before, but they were raised privately, to avoid heated debate.


          Make numpy&period;sum &lpar;&rpar; return a matrix sum instead ...      Cache   Translate Page      

I am doing a fairly complicated summation using a matrix with numpy. The shape of the matrix is matrix.shape = (500, 500) and the shape of the array is arr.shape = (25,) . The operation is as follows:

totalsum = np.sum([i * matrix for i in arr])

Here is what I don't understand:

np.sum() is very slow and returns a single float, float64. Doing the same operation with Python's sum(), i.e.

totalsum2 = sum([i*matrix for i in arr])

Preserves the shape of the matrix. That is, the resulting shape is totalsum2.shape = (500, 500). Huh?

I also think it is strange that np.sum() takes longer than sum() , particularly when we are working with numpy ndarrays.

What exactly is going on here? How is np.sum() summing the above values in comparison to sum() ?

I would like np.sum() to preserve the matrix shape. How can I set the dimensions such that np.sum() preserves the matrix size and does not return a single float?

You must call np.sum with the optional axis parameter set to 0 (summation over the axis 0, i.e the one created by your list comprehension)

totalsum = np.sum([i * matrix for i in arr], 0)

Alternatively, you can omit the brackets so that np.sum evaluates a generator.

totalsum = np.sum(i * matrix for i in arr)


          Codementor: Load Testing a Django Application using LocustIO      Cache   Translate Page      

The Django framework is used for building web applications quickly in a clean and efficient manner. As the size of an application increases, a common issue faced by all teams is the performance of the application. Measuring performance and analysing the areas of improvement is key to delivering a quality product.

LocustIO , an open source tool written in python, is used for load testing of web applications. It is simple and easy to use with web UI to view the test results. It is scalable and can be distributed over multiple machines.

This article demonstrates an example to use locust for load testing of our django web application.

Before starting load testing, we have to decide the pages which we want to test. In our case, we expect users to follow the scenario where they log in, visit different pages and submit CSRF protected forms.

LocustIO helps us emulate users performing these tasks on our web application. The basic idea of measuring performance is to make a number of requests for the different tasks and analyse the success and failure of those requests.

Installation

pip install locustio

LocustIO supports python 2.x only. Currently there is no support for python 3.x.

Locust File

Locust file is created to simulate the actions of users of the web applications.

import json
import datetime

import requests
from locust import HttpLocust, TaskSet, task


class UserActions(TaskSet):
    def on_start(self):
        self.login()

    def login(self):
        # log in to the application
        response = self.client.get('/accounts/login/')
        self.csrftoken = response.cookies['csrftoken']
        self.client.post('/accounts/login/',
                         {'username': 'username', 'password': 'password'},
                         headers={'X-CSRFToken': self.csrftoken})

    @task(1)
    def index(self):
        self.client.get('/')

    @task(2)
    def first_page(self):
        self.client.get('/list_page/')

    @task(3)
    def get_second_page(self):
        self.client.post('/create_page/', {'name': 'first_obj'},
                         headers={'X-CSRFToken': self.csrftoken})

    @task(4)
    def add_advertiser_api(self):
        auth_response = self.client.post('/auth/login/',
                                         {'username': 'suser', 'password': 'asdf1234'})
        auth_token = json.loads(auth_response.text)['token']
        jwt_auth_token = 'jwt ' + auth_token
        now = datetime.datetime.now()
        current_datetime_string = now.strftime("%B %d, %Y")
        adv_name = 'locust_adv'
        data = {'name': current_datetime_string}
        adv_api_response = requests.post('http://127.0.0.1:8000/api/advertiser/',
                                         data,
                                         headers={'Authorization': jwt_auth_token})


class ApplicationUser(HttpLocust):
    task_set = UserActions
    min_wait = 0
    max_wait = 0

In above example, locust file defines set of 4 tasks performed by the user - navigate to home page after login, visiting list page multiple times and submitting a form once.

Parameters, min_wait and max_wait, define the wait time between different user requests.

Run Locust

Navigate to the directory of locustfile.py and run

locust --host=<host_name>

where <host_name> is the URL of the application

Locust instance runs locally at http://127.0.0.1:8089

When a new test is started, the Locust web UI prompts you to enter the number of users to simulate and the hatch rate (the number of users spawned per second).

We first try to simulate 5 users with hatch rate of 1 user per second and observe the results


[screenshot from the original article]

Once a test is started, locustIO executes all tasks and results of success/failure of requests are recorded. These results are displayed in the format shown below:


[screenshot from the original article]

As seen from the example above, there is one login request and multiple requests to get a page and submit a form. Since the number of users is small, there are no failures.

Now let us increase the number of requests to 1000 users with hatch rate of 500 and see the results


[screenshots from the original article]

As we can see, some of the requests for fetching the home page and posting the form fail in this scenario as the number of users and requests increases. With the current set of simulated users, we get a failure rate of 7%.

Observations:

Most of the failures are in login. Some of the failures stem from the fact that application prevents multiple login from same account in short interval of time.

Get request for pages has very low failure rate - 3%

Post requests have lower failure rates of less than 2%

We can perform multiple tests for different numbers of users and, from the test results, identify how much stress the application can handle.

The result produces following data for tests:

Type of requests - related to each task to be simulated.

Name - Name of the task/request.

Number of requests - Total number of requests for a task.

Number of failures - Total number of failed requests.

The median, average, max and min of requests in milliseconds.

Content size - Size of requests data.

Request per second.

We can see the details of failed requests in the Failures tab, which can be used to identify the root cause of recurring failures.


[screenshot from the original article]

LocustIO provides an option to download the results as sheets; however, there is no out-of-the-box result visualization in the form of graphs or charts.

Load-test results can also be viewed in JSON format at http://localhost:8089/stats/requests . This data can be used as input for visualization with tools like Tableau, matplotlib, etc.
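As a small illustration (not from the original article, and assuming the stats endpoint returns a JSON object with a "stats" list whose entries carry name and num_requests fields; the exact schema may differ between Locust versions), the JSON could be pulled and charted like this:

import requests
import matplotlib.pyplot as plt

data = requests.get('http://localhost:8089/stats/requests').json()
stats = data.get('stats', [])           # list of per-endpoint dictionaries (assumed schema)

names = [row['name'] for row in stats]
counts = [row['num_requests'] for row in stats]

plt.bar(range(len(names)), counts)
plt.xticks(range(len(names)), names, rotation=45, ha='right')
plt.ylabel('number of requests')
plt.tight_layout()
plt.show()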

Thus we are able to determine the system performance at different endpoints in very simple and efficient way. We can expand tests to add more scenarios for more endpoints and quickly get the answers.

The article originally appeared on Apcelent Tech Blog .


          Export all tables from the database into a single csv file in python      Cache   Translate Page      

I've been trying to export all the tables in my database into a single csv file.

I've tried

import mysqldb as dbapi
import sys
import csv
import time

dbname = 'site-local'
user = 'root'
host = '127.0.0.1'
password = ''
date = time.strftime("%d-%m-%Y")
file_name = date + '-portal'
query = 'SELECT * FROM site-local;'  # <---- I'm stuck here

db = dbapi.connect(host=host, user=user, passwd=password)
cur = db.cursor()
cur.execute(query)
result = cur.fetchall()

c = csv.writer(open(file_name + '.csv', 'wb'))
c.writerow(result)

I'm a little stuck now; I hope someone can shed some light based on what I have.

Consider iteratively exporting the SHOW CREATE TABLE (txt files) and SELECT * FROM (csv files) output from all database tables. From your related earlier questions, since you need to migrate databases, you can then run the create table statements (adjusting the MySQL syntax for Postgres, such as the ENGINE=InnoDB lines) and then import the data via csv using PostgreSQL's COPY command. The csv files below include table column headers, which are not included in fetchall().

db = dbapi.connect(host=host, user=user, passwd=password)
cur = db.cursor()

# RETRIEVE TABLES
cur.execute("SHOW TABLES")
tables = []
for row in cur.fetchall():
    tables.append(row[0])

for t in tables:
    # CREATE TABLE STATEMENTS
    cur.execute("SHOW CREATE TABLE `{}`".format(t))
    temptxt = '{}_table.txt'.format(t)
    with open(temptxt, 'w', newline='') as txtfile:
        txtfile.write(cur.fetchone()[1])              # ONE RECORD FETCH
        txtfile.close()

    # SELECT STATEMENTS
    cur.execute("SELECT * FROM `{}`".format(t))
    tempcsv = '{}_data.csv'.format(t)
    with open(tempcsv, 'w', newline='') as csvfile:
        writer = csv.writer(csvfile)
        writer.writerow([i[0] for i in cur.description])   # COLUMN HEADERS
        for row in cur.fetchall():
            writer.writerow(row)
        csvfile.close()

cur.close()
db.close()
          Importing Data in Python      Cache   Translate Page      

Recently I finished two courses on data import in python at DataCamp and I was really surprised of the amount of sources that can be used to get data. Here I would like to summarize all those methods and at the same time keen my knowledge. Also I think for others it will be useful as well. So, let’s begin.

There is huge variety of files that can be used as the data source:

flat files ― csv, txt, tsv etc.
pickled files
excel spreadsheets
SAS and Stata files
HDF5
MATLAB
SQL databases
web pages
APIs

Flat files

Flat files―txt, csv―are easy and there are few ways to import them using numpy or pandas.

numpy.recfromcsv ―Load ASCII data stored in a comma-separated file. The returned array is a record array (if usemask=False, see recarray) or a masked record array (if usemask=True, see ma.mrecords.MaskedRecords).

data = np.recfromcsv(file)

numpy.loadtxt ―This function aims to be a fast reader for simply formatted files. The genfromtxt function provides more sophisticated handling of, e.g., lines with missing values.

data = np.loadtxt('file.csv', delimiter=',', skiprows=1, usecols=[0,2])

numpy.genfromtxt ―Load data from a text file, with missing values handled as specified. Much more sophisticated function that has a lot of parameters to control your import.

data = np.genfromtxt('titanic.csv', delimiter=',', names=True, dtype=None)

With pandas it’s even easier―one line and you have your file in a DataFrame ready. Also supports optionally iterating or breaking of the file into chunks.

data = pd.read_csv(file, nrows=5, header=None, sep='\t', comment='#', na_values='Nothing')

Pickle

What the hell is pickle? It is used for serializing and de-serializing a Python object structure. Any object in python can be pickled so that it can be saved on disk. What pickle does is that it “serialises” the object first before writing it to file. Pickling is a way to convert a python object (list, dict, etc.) into a character stream. The idea is that this character stream contains all the information necessary to reconstruct the object in another python script. The code below will print a dictionary that was created somewhere and stored in the file―pretty cool, isn’t it?

import pickle

with open('data.pkl', 'rb') as file:
    d = pickle.load(file)

print(d)

Excel

With pandas.read_excel, which reads an Excel table into a pandas DataFrame and offers a lot of customization, importing data has never been more pleasant (sounds like a TV commercial :D). But it is really true―the documentation for this function is clear and you are actually able to do whatever you want with that Excel file.

df = pd.read_excel('file.xlsx', sheet_name='sheet1')

SAS and Stata

SAS stands for Statistical Analysis Software. A SAS data set contains data values that are organized as a table of observations (rows) and variables (columns). To open this type of files and import data from it the code sample below will help:

from sas7bdat import SAS7BDAT

with SAS7BDAT('some_data.sas7bdat') as file:
    df_sas = file.to_data_frame()

Stata is a powerful statistical software that enables users to analyze, manage, and produce graphical visualizations of data. It is primarily used by researchers in the fields of economics, biomedicine, and political science to examine data patterns. Data is stored in .dta files, and the best way to import it is pandas.read_stata:

df = pd.read_stata('file.dta')

HDF5

Hierarchical Data Format (HDF) is a set of file formats (HDF4, HDF5) designed to store and organize large amounts of data. HDF5 is a unique technology suite that makes possible the management of extremely large and complex data collections. HDF5 simplifies the file structure to include only two major types of object:

Datasets, which are multidimensional arrays of a homogeneous type Groups, which are container structures which can hold datasets and other groups

This results in a truly hierarchical, filesystem-like data format. In fact, resources in an HDF5 file are even accessed using the POSIX-like syntax /path/to/resource . Metadata is stored in the form of user-defined, named attributes attached to groups and datasets. More complex storage APIs representing images and tables can then be built up using datasets, groups and attributes.

To import HDF5 file we’ll need h5py library. Code sample below made everything easier and totally understandable for me.

import h5py

# Load file:
data = h5py.File('file.hdf5', 'r')

# Print the keys of the file
for key in data.keys():
    print(key)

# Now when we know the keys we can get the HDF5 group
group = data['group_name']

# Going one level deeper, check out keys of group
for key in group.keys():
    print(key)

# And so on and so on

MATLAB

A lot of people work with MATLAB and store data in.mat files. So what those files are? These files contain list of variables and objects assigned to them in MATLAB workspace. It’s not surprising that it is imported in Python as dictionary in which keys are MATLAB variables and values―objects assigned to these variables. To write and read MATLAB files scipy.io package is used.

import scipy.io

mat = scipy.io.loadmat('some_project.mat')
print(mat.keys())

Relational Databases

Using drivers to connect to a database we can grab data directly from there. Usually it means: create connection, connect, run the query, fetch the data, close connection. It is possible to do it step by step, but in pandas we have an awesome function that does it for us, so why bother ourselves? It only requires a connection that can be created with sqlalchemy package. Below is the example on connecting to sqlite database engine and getting data from it:

from sqlalchemy import create_engine
import pandas as pd

# Create engine
engine = create_engine('sqlite:///localdb.sqlite')

# Execute query and store records in DataFrame
df = pd.read_sql_query("select * from table", engine)

Data from Web

A separate article should be written on this, but I will highlight few things to at least know where to start. First of all, if we have a direct url to a file we can just use standard pandas.read_csv/pandas.read_excel functions specifying it in the parameter “file=”

df = pd.read_csv('https://www.example.com/data.csv', sep=';')

Apart from this, to get data from the web we need to use HTTP protocol and especially GET method (there are a lot of them, but for the import we don’t need more). And package requests does an incredible job doing this. To access a text from the response received by requests.get we just need to use method.text.

import requests

r = requests.get('http://www.example.com/some_html_page')
print(r.text)

r.text will give us a web-page with all html-tags on it―not very useful, isn’t it? But here is where the fun begins. We have a BeautifulSoup package that can parse that HTML and extract the information we need, in this case all hyperlinks (continuing previous example):

from bs4 import BeautifulSoup

html_doc = r.text

# Create a BeautifulSoup object from the
          Extract plain text from HTML using Python      Cache   Translate Page      

I'm trying to extract plain text from a website using python. My code is something like this (a slightly modified version of what I found here):

import requests
import urllib
from bs4 import BeautifulSoup

url = "http://www.thelatinlibrary.com/vergil/aen1.shtml"
r = requests.get(url)
k = r.content

file = open('C:\\Users\\Anirudh\\Desktop\\NEW2.txt', 'w')

soup = BeautifulSoup(k)
for script in soup(["Script", "Style"]):
    script.exctract()

text = soup.get_text
file.write(repr(text))

This doesn't seem to work. I'm guessing that beautifulsoup doesn't accept r.content . What can I do to fix this?

This is the error -

UserWarning: No parser was explicitly specified, so I'm using the best available HTML parser for this system ("html.parser"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently. The code that caused this warning is on line 8 of the file C:/Users/Anirudh/PycharmProjects/untitled/test/__init__.py. To get rid of this warning, change code that looks like this: BeautifulSoup([your markup]) to this: BeautifulSoup([your markup], "html.parser") markup_type=markup_type))

Traceback (most recent call last):
  File "C:/Users/Anirudh/PycharmProjects/untitled/test/__init__.py", line 12, in <module>
    file.write(repr(text))
  File "C:\Python34\lib\encodings\cp1252.py", line 19, in encode
    return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\x97' in position 2130: character maps to <undefined>

Process finished with exit code 1

The "error" is a warning, and is of no consequence. Quieten it with soup = BeautifulSoup(k, 'html.parser')

There seems to be a typo script.exctract() The word extract is spelt incorrectly.

The actual error seems to be that the content is a bytestring, but you are writing in text mode. The source contains an em dash. Handling this character is the problem.

You can encode with soup.encode("utf-8") . This means hardcoding the encoding into your script (which is bad). Or try using binary mode for the file open(..., 'wb') , or converting the content to a string before passing it to Beautiful Soup, using the correct encoding for that file, with k = str(r.content,"utf-8") .
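Putting those suggestions together, as an untested sketch rather than a guaranteed fix, the script could be rewritten like this (note that get_text must also be called with parentheses, otherwise the method object itself is written out):

import requests
from bs4 import BeautifulSoup

url = "http://www.thelatinlibrary.com/vergil/aen1.shtml"
r = requests.get(url)

soup = BeautifulSoup(r.content, "html.parser")
for script in soup(["script", "style"]):   # tag names are lowercase in the parsed tree
    script.extract()

text = soup.get_text()

# write the file as UTF-8 so the default cp1252 encoding is not used
with open(r'C:\Users\Anirudh\Desktop\NEW2.txt', 'w', encoding='utf-8') as f:
    f.write(text)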


          WxPython&colon; global switch of the shortcut key      Cache   Translate Page      

I'm trying to create a hotkey toggle(f12) that will turn on a loop when pressed once then turn that loop off when pressed again. The loop is a mouse click every .5 seconds when toggled on. I found a recipe for a hot keys on the wxpython site and I can get the loop to turn on but can't figure a way to get it to turn off. I tried created a separate key to turn it off without success. The mouse module simulates 1 left mouse click.

Here's my current code:

import wx, win32con, mouse
from time import sleep

class Frameclass(wx.Frame):

    def __init__(self, parent, title):
        super(Frameclass, self).__init__(parent, title=title, size=(400, 200))
        self.Centre()
        self.Show()

        self.regHotKey()
        self.Bind(wx.EVT_HOTKEY, self.handleHotKey, id=self.hotKeyId)
        self.regHotKey2()
        self.Bind(wx.EVT_HOTKEY, self.handleHotKey2, id=self.hotKeyId2)

    def regHotKey(self):
        """This function registers the hotkey Alt+F12 with id=150"""
        self.hotKeyId = 150
        self.RegisterHotKey(self.hotKeyId, win32con.MOD_ALT, win32con.VK_F12)  # the key to watch for

    def handleHotKey(self, evt):
        loop = True
        print('clicks on')
        while loop == True:
            # simulated left mouse click
            mouse.click()
            sleep(0.50)
            x = self.regHotKey2()
            print(x)
            if x == False:
                print('Did it work?')
                break
            else:
                pass

    # --------------------- second keypress hotkey --------
    def regHotKey2(self):
        self.hotKeyId2 = 100
        self.RegisterHotKey(self.hotKeyId2, win32con.MOD_ALT, win32con.VK_F11)

    def handleHotKey2(self, evt):
        return False
        loop = False
        print(loop)

if __name__ == '__main__':
    showytitleapp = wx.App()  # gotta have one of these in every wxpython program apparently
    Frameclass(None, title='Rapid Clicks')
    showytitleapp.MainLoop()  # infinite mainloop for catching all the program's stuff

Your loop variable is locally scoped inside of handleHotKey . Because regHotKey2 is bound to handleHotKey2 , which is a different listener, the event it generates will never affect the loop within handleHotKey . Besides that, the first line of handleHotKey2 is a return value, which will quit the function before the following two lines are executed.

Out of curiousity, what output does x=self.regHotKey2(); print(x) produce?

Try defining your loop variable at the class level instead of the function level -

def __init__(self, parent, title):
    ... your original stuff ...
    self.clicker_loop = False

and then modifying that loop in your handlers -

def handleHotKey(self, evt):
    self.clicker_loop = True
    while self.clicker_loop:
        ... do the thing ...

def handleHotKey2(self, evt):
    self.clicker_loop = False

Please try this and tell me if this works.

And maybe this will toggle the loop from the same hotkey...

def handleHotKey(self, evt):
    if self.clicker_loop:
        self.clicker_loop = False
    else:
        self.clicker_loop = True


          为什么说Python是Fintech与金融变革的秘密武器      Cache   Translate Page      

人生苦短,不止程序员,python正在吸引来自金融领域大佬们的青睐目光。

金融科技的风口下,无数传统金融人都想从中掘一桶金。你如何找到自己的机会并在金融科技的风口中起飞?

这项新技术风靡全球,但其复杂性难以言喻。

首先,你需要熟悉国家法规,同时与不同服务和机构的合作,连接银行API;其次你需要征服用户的心和信任。为了实现这些目标,你的产品需要兼具高级别安全性,功能性并且贴合业务需求。

所有这些意味着你需要最独特合适的技术,来提供值得信赖的解决方案。

无论背景(市场)如何,每个人都希望自己的钱安全无虞。人们孜孜不倦寻找一种可以持续的金融技术,本文从金融科技行业的角度分析了Python受欢迎的原因。

值得注意的是,现在Python已成为世界上 最流行的编码语言 :开发栈快,语言简单,适合做数据分析,开放库利于API整合等都是它的优势。

矛盾的金融时代

现代金融世界由两个仍然共存的矛盾体组成:

曾几何时,千禧一代掌握了非接触式支付,使用在线银行业务和各种数字金融服务,在生活中自由交易。轻视老派官僚主义的新技术,建立了千禧一代的新世界;

另一个部分则是古老的传统金融世界。令人失望的是,这是一台非常古老而生锈的机器,不能随心所欲地停下来。即使它接受新技术及其对金融的影响,传统的金融体系仍然不认为新技术是一个威胁,也不是一个有价值的竞争对手。

这种不可动摇的机器会在七国集团(G7)最发达的国家找到。所有沉积资金都聚集在那里,同时也聚集着大多数准备运营高科技创业公司的人。改变这样的传统金融系统将是一个巨大的挑战。

例如,德勤2017年的统计数据显示,与金融技术相比,G7的习惯与世界其他地方相反。德勤研究人员指出:令人惊讶的是,在移动支付方面,40%的美国高管认为其对自己行业的影响很小甚至没有影响。在一个规模相对小的抽样调查中,17家美国银行中有7家(约占41%)认为移动钱包和其他支付技术没有影响他们,而36个非银行金融单位中有14个(约占37%)持相同意见。

发展中国家则呈现出截然不同的景象。没有传统金融部门的强硬统治,这为金融科技的成长和发展提供了更多空间。也为人们提供了更多机会和方式,可以轻松地与发达国家合作并获得更安全的回报。老实说,这是金融科技最有吸引力的地方――它消除了金融边界!

新兴技术的使用:被Fintech攻占的金融世界
为什么说Python是Fintech与金融变革的秘密武器

图3:使用新兴技术的情况:G7与其他国家

数据来源:GDSI增长和战略调查问卷,德勤金融中心

七国集团似乎仍然对金融科技持怀疑态度,但是实际上技术在不断改变金融。问题在于这个世界上的一切都变化很快,技术也是如此。它灵活,能够适应新用户的需求。

但这正是千禧一代想要的:新的消费习惯,数字敏感度高,对网络产品的需求,所有这些都是新一代生活方式的一部分。他们不浪费任何时间,并要求全天候保持工作效率。这就是他们随时随地都重视财务自由的原因。

据华尔街日报报道,支付的便利性吸引了那些对技术革新有需求并且生活忙碌的人。移动支付用户大多受过高等教育,并且全职工作,主要是男性,并且有非常活跃的金融行为。与非移动支付的用户相比,他们更有可能拥有银行账户,退休账户,拥有自己的房屋,以及利用汽车贷款和抵押贷款。

相关链接

https://blogs.wsj.com/experts/2018/06/07/the-uncomfortable-relationship-between-mobile-payments-and-financial-literacy/

我们可以得出什么结论呢?

华尔街日报的统计数据显示,移动支付的用户收入高于非移动支付用户,他们的交易活动活跃,懂得财务知识更多,他们使用更多种类的金融产品。与此同时,他们对自己的开支更加粗心,极有可能陷入债务。有时,他们甚至从退休账户中取钱出来。这需要全新的金融科技浪潮中产现出一个简单的工具,来帮助千禧一代管理他们的资金。尽管他们的收入和教育水平很高,但据报道,使用移动支付的千禧一代有更高的财务困境和管理不善的风险。


为什么说Python是Fintech与金融变革的秘密武器

我们的研究发现移动支付用户需要的不仅是移动交易。用户希望能从借助产品来管理短期债务和日常费用,这些将是金融科技产品未来的创新方向。

金融业是一个对新客户需求极度敏感的行业。在数字化的时代更是如此。当同类产品变得过于普及和方便的时候,客户可能会不再使用你的产品服务。怎么防止这种情况呢?公司是否可以创造一款经得起时间考验的产品,陪伴年轻人的财务成长,持续给千禧一代提供服务?就像当前一些金融公司给年轻人提供产品一样。当然抵押贷款,投资和财富管理等金融分支机构也应特别谨慎地创造他们的产品。

回到之前说的话题。为了生存,为了获得大量的追随者和依赖它的客户,公司的技术必须是独特,稳定,安全和定制的,以满足客户的需求。在这一点上,金融科技公司不可能避免得需要与传统的金融和国家机构整合。这就是为什么你必须首先确保合作能完美运行,并且你得在后者眼中看起来像是一个可靠的商业伙伴,他们使用你的技术,而不是别人的技术。可能最糟糕的是,他们抛弃你选择创造属于自己的技术!

Python:Fintech产品的第一语言

那么我们到底需要什么?一个足以对抗全球金融干扰压力的技术,且具有足够的灵活性来应对新世界的挑战与客户日益增长的需求。

对于我们来说,使用Python和Django框架是一个非常好的选择,我们同时发现这个组合带给我们各种可能性。

这里并非试图把Python作为所有问题的解决方案,但只想聊聊Python在金融产品方面的优势。

1.使用Python/Django技术栈可以更快的推向市场。

这很容易理解:通过Python/Django技术栈,你可以非常快速的构建产品(MVP:Model-View-Presenter),进而增加找到适合的产品/市场的机会。

金融科技(Fintech)能够与传统银行和金融竞争和/或合作的唯一方式在于适应变化性与客户的需求,根据客户的想法提供增值服务并进行改进。你的技术必须足够灵活,并为众多的增值服务提供坚实的基础。

Python/Django框架组合符合MVP规范的需求,并能够节省一定开发时间成本。它们的开发基本类似乐高一样――你不需要从头开始开发类似权限或用户管理这样的小模块。你只需要从Python库中 (Numpy,Scipy,Scikit-learn,Statsmodels,Pandas,Matplotlib,Seaborn,等)找到你需要的模块,用于构建自己的MVP。

Django的另一个优点是在MVP架构开发阶段提供了简单的管理面板或CRM――它是内置的;你只需要在你的产品中简单设置。当然在MVP阶段,产品的功能并不完整,但你可以测试并轻松完善功能,因为Django非常灵活。

在MVP架构完成后,此技术栈允许部分代码的调整。也就是说在你完成了MVP架构的功能后,既可以轻松的修改某些代码,也可以增加一些新代码,来满足产品功能的完美运行。

千禧一代习惯在快节奏的世界中生活,他们需要全天候的提高工作效率。他们对其他人以及所使用的服务的期望在于,最大化的透明度与高质量的服务。这也是客户发展如此重要的原因――整整一代人都依赖与此。因此,越早地将产品推向市场,你就能越快地收集用户反馈并改进产品。通过Python开发金融产品可以帮助你更加轻松的完成整个流程。

2.数学和经济学常用Python。

很显然,正是因为有了那些使用Python计算算法和公式的数学家和经济学家,Fintech才会存在。类似R和Matlab语言在经济学家中很少使用,但Python相对而言是最常用的金融编程语言,并且是数据科学的“通用语言”。经济学家使用Python来进行计算,因此很明显将他们的代码与基于Python开发的产品整合起来会更容易。但有时即使只是用同一种语言编写的代码片段也很难集成,这也是为什么技术合作伙伴的存在和相互沟通至关重要。

3.语法简单――协作更加轻松。

大道至简。

Python的简单性和易于理解的语法使得它非常清晰,每个人都可以快速上手。这也是我认为Python会成为“通用语言”只是时间问题。Python的创始者Guido van Rossum证实了我的想法,他将Python描述为“高级编程语言,其核心设计理念在于代码的可读性和允许程序员用几行代码表达思想的语法”。

因此,Python的好处在于不仅对于技术专家很容易理解,连客户也很容易理解。开发过程中双方人员都可以掌握不同程度的技术理解。有了Python,工程师可以更轻松的解释代码,客户也可以更好的了解开发进展。看起来,这是个双赢的过程。

正如经济学家谈及Python时所说的:Python语言的两个主要优点是其简单性和灵活性。它简单的语法和缩进格式使其易于学习、阅读和共享。它的忠诚追随者们,即Python编程高手(Pythonistas),已经上传了145,000个定制数据包到在线库中。这些数据包涵盖了从游戏开发到天文学等的所有内容,并且可以在几秒内完成安装,并应用在Python程序中。

这也引出了下一要点。

4.Python的开放库包括用于API集成的工具。

感谢Python的开放库,你无须从头开发工具,并可以在最短时间内完成产品开发并分析大量数据。如果你处于MVP开发阶段,这些开放库可以为你节省大量的时间和金钱。

正如我之前所提到的,Fintech产品需要与大量第三方产品进行集成。Python库可以帮助你的产品更加容易与其他系统通过不同的API(接口)集成。在金融方面,API可以帮助你收集和分析关于用户、房地产和机构的所需数据。例如,在英国,你可以通过API获取人们的信用记录,这也是进行深入金融操作的必经步骤。通过使用在线抵押贷款行业的API,你可以检查房地产数据,并验证某人的身份。最重要的是,你可以一键查询或过滤数据,而无需使用和组合不同的库/包来开发新的工具。

以Django Stars(一家软件开发公司)为例,使用Django Rest架构来构建API或与外部API集成,同时使用Celery(Python 并行分布式框架)来完成队列或分发任务。

5.Python流行度日益增长,人才储备充足。

根据HackerRank2018开发者技能报告显示,Python成为编程人员需要学习的第二语言,并且是金融服务业以及其他发展行业的排名前三语言之一。

这是很好的趋势,因为Python将继续发展,并有更多的专家参与进来,这些情况表明将有足够的人才会在未来能够继续开发和维护我们的产品。


为什么说Python是Fintech与金融变革的秘密武器

根据我们的Love-Hate指数,Python已经赢得了所有年龄段开发者的心。Python也是开发人员想要学习的最流行的语言,并且绝大多数人都知道它。

―HackerRank

Python的用途比你想象的要多:从传统软件类似web开发到最前沿技术,如AI。它兼具灵活性与功能多样性,并且拥有超过125,000个第三方Python库可以让你像乐高一样构建产品。它同时是数据分析的首选语言,这也让它对于商业等非技术领域具有吸引力,Python同时也是金融分析的最佳编程语言。

再次强调,我并不是说Python是唯一的解决方案。我只是就我自己的经验而谈,Python非常成功。我发现Python与Django结合起来使用确实非常棒。

这也是你构建Fintech产品所需要的――一个超级工具能够帮助你的产品赢得信赖,完全安全并且功能实用。遵守国家法律,完美与其他服务、机构以及银行API集成整合――所有这一切都需要关注软件的细节和生命周期,这样才能为未来的接管者――新的千禧一代所服务。继而登上顶峰,成为改变金融市场的人之一,或者更进一步,改变整个世界。独特、高效,以用户为导向,着眼未来做开发。这就是Python的全部意义之所在。


          Writing your own RFM model in Python      Cache   Translate Page      

Recently I have been working on data-driven operations for a website. In the past I always used SPSS for customer-value segmentation: import the data, click a few buttons, done.

However, when faced with customized requirements, that kind of canned tool no longer cuts it!

I am no expert, but I know a bit of Python, haha, so next let's use the numpy and pandas packages to write our own RFM model.

As for what the R, F and M in RFM stand for, please look that up yourselves; I'll be lazy and go straight to a detailed walkthrough of the code.

The commonly used Python packages are covered in many of the courses on 天善智能, so I won't go on about them; straight to the code:


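The original post shows the implementation only as screenshots, which are not reproduced here. As a rough sketch of what RFM scoring with pandas typically looks like (the column names, quantile-based scoring and the example CSV are my own assumptions, not the author's code):

import pandas as pd

# assumed input: one row per order, with a customer id, an order date and an amount
df = pd.read_csv('orders.csv', parse_dates=['order_date'])

snapshot = df['order_date'].max() + pd.Timedelta(days=1)
rfm = df.groupby('customer_id').agg(
    recency=('order_date', lambda s: (snapshot - s.max()).days),
    frequency=('order_date', 'count'),
    monetary=('amount', 'sum'),
)

# score each dimension 1-4 by quartile (a lower recency is better, hence the reversed labels)
rfm['R'] = pd.qcut(rfm['recency'], 4, labels=[4, 3, 2, 1])
rfm['F'] = pd.qcut(rfm['frequency'].rank(method='first'), 4, labels=[1, 2, 3, 4])
rfm['M'] = pd.qcut(rfm['monetary'], 4, labels=[1, 2, 3, 4])
rfm['RFM'] = rfm['R'].astype(str) + rfm['F'].astype(str) + rfm['M'].astype(str)
print(rfm.head())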

This article was written by 马修 and is licensed under the Creative Commons Attribution-ShareAlike 3.0 China Mainland license.

Please contact the author before reprinting or quoting, credit the author and state the source of the article.

The copyright of articles on this site belongs to the original authors and sources. The content represents the authors' personal views and does not mean this site endorses them or vouches for their accuracy. This site is a platform for personal learning and exchange and is not used for any commercial purpose. If there is any problem, please contact us promptly and we will correct or delete the content at the copyright holder's request. This site reserves the right of final interpretation of this statement.


          Support Python Linux Joystick&quest;      Cache   Translate Page      
How can I use an analog joystick in python on linux? I come from a C++ background, where I used joystick.h to read events from /dev/input/js[x]. Is there a python wrapper around this I can use, perhaps? I don't really want to have to use a huge library like pyGame or SDL?

There is evdev , It's only for Linux, and it seems to be able to do much more than just handling joystick. I've never tried it, though.

I spent some time looking for a library to only read joystick in a cross-platform way, but didn't find any, and I've ended up with pygame (only initializing joystick and event modules) in my projects.
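For completeness, a minimal sketch of the evdev route mentioned above, based on evdev's documented API rather than anything tried in this thread (the event device path is an assumption):

import evdev

# list the available input devices and pick the joystick by name
for path in evdev.list_devices():
    print(path, evdev.InputDevice(path).name)

dev = evdev.InputDevice('/dev/input/event3')   # assumed path of the joystick
for event in dev.read_loop():                  # blocking event loop
    if event.type == evdev.ecodes.EV_ABS:      # analog axis movement
        print('axis', event.code, 'value', event.value)
    elif event.type == evdev.ecodes.EV_KEY:    # button press/release
        print('button', event.code, 'state', event.value)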


          Using JSON config files in Python      Cache   Translate Page      
Do you want to load config values at run time in python? After reading this tutorial you have learned how to create a JSON configuration file, load it with Python and how to access values from it.

Let’s assume your app needs variables width and height and you want your users to be able to change the values in a config file.

Step 1: Create the JSON config file

Create an empty text file config.json and add a width and a height value in JSON notation like this:

{ "width" : 1024, "height" : 768 }

Step 2: Create the Python script

Create an empty text file loadconfig.py and put it in the same folder as config.json. To open config.json you can use Python's open function. To use the json.load function, you need to import the json module. json.load returns a dictionary that can be accessed by its keys. The syntax is data['width'].

Here is the code:

import json

with open('config.json') as config_file:
    data = json.load(config_file)

width = data['width']
height = data['height']

print(width)
print(height)

Execute

Open a terminal and execute the script with python3 loadconfig.py . The result should look like this:
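Given the config values above, the printed result would simply be:

1024
768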


          After Redis, Python may also be reluctantly forced to change its master-slave wording      Cache   Translate Page      

A couple of days ago we reported a piece of news about Redis: because the master-slave terminology in Redis is considered offensive by some, there were many calls to change it. In the end the Redis author reluctantly made some compromises while trying not to affect the project. Now this politically charged "movement" has spread to Python, and even Guido van Rossum, who had announced his withdrawal from the Python core development decision-making group, has been called back to settle the debate about politically incorrect language.

Guido van Rossum is the creator of Python and is known as its "Benevolent Dictator For Life (BDFL)", but his current situation is rather like that of Michael Corleone, head of the Corleone mafia family in The Godfather.

Like other open-source communities, Python's maintainers have been asked whether they really want to keep using the terms master and slave to describe technical operations and relationships, because these words remind some people of the old slavery system in the United States, a historical legacy that still triggers heated political debate today.

Just last week, Victor Stinner, a Python developer working at Red Hat, publicly submitted 4 PRs hoping to change occurrences of "master" and "slave" in the Python documentation and code to terms like "parent" and "worker", and to adjust other similar terms as well. In his bug report, Victor Stinner explained that for the sake of diversity it might be better to avoid terms associated with slavery, such as 'master' and 'slave'. He also pointed out that there had been complaints about this before, but they were raised privately, to avoid heated debate.

By the time Python 3.8 is released, terms considered "offensive" like these will presumably be fewer.

In fact, this kind of political-correctness issue is nothing new in tech circles. The Redis case from a couple of days ago had already caused a fierce debate in the community last year. In 2014, after some discussion, Drupal replaced the words "master" and "slave" with "primary" and "replica". The same year, Django replaced "master" and "slave" with "leader" and "follower". CouchDB also did a similar cleanup of its wording in 2014.

These debates persist in the tech industry. In 2004 the watchdog organization Global Language Monitor listed the use of "master" and "slave" in the technology industry as the most politically incorrect term of the year. Yet the industry usage of these terms goes back decades; it can even be found in several RFCs, for example RFC 977 (1986).

As for this Python discussion, it was predictable that not every Python developer taking part would agree with Stinner's proposed changes. The comments on Stinner's bug report echo every other online argument on this topic.

"I'm not super-excited by Python changing its behavior based on secret comments," lamented Larry Hastings. "Traditionally Python has had a very open governance model, where all discussions happen in the open."

"Is it really necessary to 'pollute' the Python codebase with SJW (Social Justice Warrior) ideology/terminology?" asked Gabriel Marko. "What will come next?"

Raymond Hettinger also questioned whether these terms really do obvious harm to anyone. He wrote in a comment: if a particular passage is unclear or offensive, it should indeed be changed; otherwise we should not let vague notions of political correctness affect otherwise clear, common English usage. And as far as he knows, there is no case where 'master' was used in the documentation to reflect slavery or to imply endorsement of that concept.

In the end, van Rossum stepped into the debate to close a discussion that seemed to have no resolution. He wrote in a comment: he is closing these PRs; three out of four of Victor's PRs have been merged, but the fourth should not be merged because it reflects the underlying terminology of UNIX ptys. There is also a discussion about 'pliant children' -> 'helpers', which can be handled as a follow-up PR without keeping the discussion open.

I'm closing this now. Three out of four of Victor's PRs have been merged. The fourth one should not be merged because it reflects the underlying terminology of UNIX ptys. There's a remaining quibble about "pliant children" -> "helpers" but that can be dealt with as a follow-up PR without keeping this discussion open.

However, we should all understand that getting rid of the real masters and slaves is far more than something a pull request can solve.

Related articles:

Redis author forced to change the description of the master-slave architecture


          Multi-Person Pose Estimation in OpenCV using OpenPose      Cache   Translate Page      

In ourprevious post, we used the OpenPose model to perform Human Pose Estimation for a single person. In this post, we will discuss how to perform multi-person pose estimation.

When there are multiple people in a photo, pose estimation produces multiple independent keypoints. We need to figure out which set of keypoints belong to the same person.

We will be using the 18 point model trained on the COCO dataset for this article. The keypoints along with their numbering used by the COCO Dataset is given below:

COCO Output Format

Nose 0, Neck 1, Right Shoulder 2, Right Elbow 3, Right Wrist 4,

Left Shoulder 5, Left Elbow 6, Left Wrist 7, Right Hip 8,

Right Knee 9, Right Ankle 10, Left Hip 11, Left Knee 12,

LAnkle 13, Right Eye 14, Left Eye 15, Right Ear 16,

Left Ear 17, Background 18

1. Network Architecture

The OpenPose architecture is shown below. Click to enlarge the image.


Multi-Person Pose Estimation in OpenCV using OpenPose
Figure 1: Multi-Person Pose Estimation model architecture

The model takes as input a color image of size h x w and produces, as output, an array of matrices which consists of the confidence maps of Keypoints and Part Affinity Heatmaps for each keypoint pair. The above network architecture consists of two stages as explained below:

Stage 0 : The first 10 layers of the VGGNet are used to create feature maps for the input image. Stage 1 : A 2-branch multi-stage CNN is used where The first branch predicts a set of 2D Confidence Maps

(S) of body part locations ( e.g. elbow, knee etc.). A Confidence Map is a grayscale image which has a high value at locations where the likelihood of a certain body part is high. For example, the Confidence Map for the Left Shoulder is shown in Figure 2 below. It has high values at all locations where the there is a left shoulder.

For the 18 point model, the first 19 matrices of the output correspond to the Confidence Maps.


Multi-Person Pose Estimation in OpenCV using OpenPose
Figure 2 : Showing confidence maps for Left Shoulder for the given image The second branch predicts a set of 2D vector fields (L) of Part Affinities (PAF), which encode the degree of association between parts (keypoints). The 20th to 57th matrices are the PAF matrices. In the figure below part affinity between the Neck and Left shoulder is shown. Notice there is a large affinity between parts belonging to the same person.
Multi-Person Pose Estimation in OpenCV using OpenPose
Figure 3 : Showing Part Affinity maps for Neck Left Shoulder pair for the given image

The Confidence Maps are used to find the keypoints and the Affinity Maps are used to get the valid connections between the keypoints.

Please download code using the link below to follow along with the tutorial.

I would like to thank my teammate Chandrashekara Keralapura for writing the C++ version of the code.

Download Code

To easily follow along this tutorial, please download code by clicking on the button below. It’s FREE!

Download Code

2. Download Model Weights

Use the getModels.sh file provided with the code to download the model weights file. Note that the configuration proto files are already present in the folders.

From the command line, execute the following from the downloaded folder.

sudo chmod a+x getModels.sh
./getModels.sh

Check the folders to ensure that the model binaries (.caffemodel files ) have been downloaded. If you are not able to run the above script, then you can download the model by clicking here . Once you download the weight file, put it in the “pose/coco/” folder.

3. Step 1: Generate output from image

3.1. Load Network

Python

protoFile = "pose/coco/pose_deploy_linevec.prototxt"
weightsFile = "pose/coco/pose_iter_440000.caffemodel"

net = cv2.dnn.readNetFromCaffe(protoFile, weightsFile)

C++

cv::dnn::Net inputNet = cv::dnn::readNetFromCaffe("./pose/coco/pose_deploy_linevec.prototxt", "./pose/coco/pose_iter_440000.caffemodel");

3.2. Load Image and create input blob

Python

image1 = cv2.imread("group.jpg")

# Fix the input Height and get the width according to the Aspect Ratio
inHeight = 368
inWidth = int((inHeight/frameHeight)*frameWidth)

inpBlob = cv2.dnn.blobFromImage(image1, 1.0 / 255, (inWidth, inHeight), (0, 0, 0), swapRB=False, crop=False)

C++

std::string inputFile = "./group.jpg";

if(argc > 1){
    inputFile = std::string(argv[1]);
}

cv::Mat input = cv::imread(inputFile, CV_LOAD_IMAGE_COLOR);

cv::Mat inputBlob = cv::dnn::blobFromImage(input, 1.0/255.0,
                        cv::Size((int)((368*input.cols)/input.rows), 368),
                        cv::Scalar(0,0,0), false, false);

3.3. Forward pass through the Net

Python

net.setInput(inpBlob)
output = net.forward()

C++

inputNet.setInput(inputBlob);
cv::Mat netOutputBlob = inputNet.forward();

3.4. Sample Output

We first resize the output to the same size as that of the input. Then we check the confidence map corresponding to the nose keypoint. You can also use cv2.addWeighted function for alpha blending the probMap on the image.

i = 0
probMap = output[0, i, :, :]
probMap = cv2.resize(probMap, (frameWidth, frameHeight))

plt.imshow(cv2.cvtColor(image1, cv2.COLOR_BGR2RGB))
plt.imshow(probMap, alpha=0.6)
Multi-Person Pose Estimation in OpenCV using OpenPose
Figure 4 : Showing the confidence map corresponding to the nose Keypoint. 4. Step 2: Detection of keypoints

As seen from the above figure, the zeroth matrix gives the confidence map for the nose. Similarly, the first Matrix corresponds to the neck and so on. As discussed in our previous post, for a single person, it is very easy to find the location of each keypoint just by finding the maximum of the confidence map. But for multi-person scenario, we can’t do this.

NOTE:The explanation and code snippets in this section belong to the getKeypoints() function.

For every keypoint, we apply a threshold ( 0.1 in this case ) to the confidence map.

Python

mapSmooth = cv2.GaussianBlur(probMap, (3, 3), 0, 0)
mapMask = np.uint8(mapSmooth > threshold)

C++

cv::Mat smoothProbMap;
cv::GaussianBlur( probMap, smoothProbMap, cv::Size( 3, 3 ), 0, 0 );

cv::Mat maskedProbMap;
cv::threshold(smoothProbMap, maskedProbMap, threshold, 255, cv::THRESH_BINARY);

This gives a matrix containing blobs in the region corresponding to the keypoint as shown below.


Multi-Person Pose Estimation in OpenCV using OpenPose
Figure 5 : Confidence Map after applying threshold

In order to find the exact location of the keypoints, we need to find the maxima for each blob. We do the following :

First find all the contours of the region corresponding to the keypoints. Create a mask for this region. Extract the probMap for this region by multiplying the probMap with this mask. Find the local maxima for this region. This is done for each contour ( keypoint region ). Python #find the blobs _, contours, _ = cv2.findContours(mapMask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) #for each blob find the maxima for cnt in contours: blobMask = np.zeros(mapMask.shape) blobMask = cv2.fillConvexPoly(blobMask, cnt, 1) maskedProbMap = mapSmooth * blobMask _, maxVal, _, maxLoc = cv2.minMaxLoc(maskedProbMap) keypoints.append(maxLoc + (probMap[maxLoc[1], maxLoc[0]],)) C++ std::vector<std::vector<cv::Point> > contours; cv::findContours(maskedProbMap,contours,cv::RETR_TREE,cv::CHAIN_APPROX_SIMPLE); for(int i = 0; i < contours.size();++i){ cv::Mat blobMask = cv::Mat::zeros(smoothProbMap.rows,smoothProbMap.cols,smoothProbMap.type()); cv::fillConvexPoly(blobMask,contours[i],cv::Scalar(1)); double maxVal; cv::Point maxLoc; cv::minMaxLoc(smoothProbMap.mul(blobMask),0,&maxVal,0,&maxLoc); keyPoints.push_back(KeyPoint(maxLoc, probMap.at<float>(maxLoc.y,maxLoc.x)));

We save the x, y coordinates and the probability score for each keypoint. We also assign an ID to each key point that we have found. This will be used later while joining the parts or connections between keypoint pairs.

Given below are the detected keypoints for the input image. You can see that it does a nice job even for partly visible person and even for person facing away from the camera.


Multi-Person Pose Estimation in OpenCV using OpenPose
Figure 6 : Detected Keypoints overlayed on the input image

Also, the keypoints without overlaying it on the input image is shown below.


Multi-Person Pose Estimation in OpenCV using OpenPose
Figure 7 : Detected points overlayed on a black background.

From the first image, you can see that we have found all the keypoints. But when the keypoints are not overlayed on the image (Figure 7), we cannot tell which part belongs to which person. We have to robustly map each keypoint to a person. This part is not trivial and can result in a lot of errors if not done correctly. For this, we will find the valid connections ( or valid pairs ) between the keypoints and then assemble these connections to create skeletons for each person.

5. Step 3 : Find Valid Pairs

A valid pair is a body part joining two keypoints, belonging to the same person. One simple way of finding the valid pairs would be to find the minimum distance between one joint and all possible other joints. For example, in the figure given below, we can find the distance between the marked Nose and all other Necks. The minimum distance pair should be the one corresponding to the same person.


Multi-Person Pose Estimation in OpenCV using OpenPose
Figure 8 : Getting the connection between keypoints by using a simple distance measure.

This approach might not work for all pairs; specially, when the image contains too many people or there is occlusion of parts. For example, for the pair, Left-Elbow -> Left Wrist The wrist of the 3rd person is closer to the elbow of the 2nd person as compared to his own wrist. Thus, it will not result in a valid pair.


Multi-Person Pose Estimation in OpenCV using OpenPose
Figure 9 : Only using distance between keypoints might fail in some cases.

This is where the Part Affinity Maps come into play. They give the direction along with the affinity between two joint pairs. So, the pair should not only have minimum distance, but their direction should also comply with the PAF Heatmaps direction.

Given below is the Heatmap for the Left-Elbow -> Left-Wrist connection.


Multi-Person Pose Estimation in OpenCV using OpenPose
Figure 10 : Showing Part Affinity Heatmaps for the Left-Elbow -> Left-Wrist pair.

Thus, in the above case, even though the distance measure wrongly identifies the pair, OpenPose gives correct result since the PAF will comply only with the unit vector joining Elbow and Wrist of the 2nd person.

The approach taken in the paper is as follows : Divide the line joining the two points comprising the pair. Find “n” points on this line. Check if the PAF on these points have the same direction as that of the line joining the points for this pair. If the direction matches to a certain extent, then it is valid pair.

Let’s see how it is done in code; The code snippets belong to the getValidPairs() function in code provided.

For each body part pair, we do the following :

Take the keypoints belonging to a pair. Put them in separate lists (candA and candB). Each point from candA will be connected to some point in candB. The figure given below shows the points in candA and candB for the pair Neck -> Right-Shoulder.
Multi-Person Pose Estimation in OpenCV using OpenPose
Figure 11 : Showing the candidates for matching for the pair Neck -> Nose. Python pafA = output[0, mapIdx[k][0], :, :] pafB = output[0, mapIdx[k][1], :, :] pafA = cv2.resize(pafA, (frameWidth, frameHeight)) pafB = cv2.resize(pafB, (frameWidth, frameHeight)) # Find the keypoints for the first and second limb candA = detected_keypoints[POSE_PAIRS[k][0]] candB = detected_keypoints[POSE_PAIRS[k][1]] C++ //A->B constitute a limb cv::Mat pafA = netOutputParts[mapIdx[k].first]; cv::Mat pafB = netOutputParts[mapIdx[k].second]; //Find the keypoints for the first and second limb const std::vector<KeyPoint>& candA = detectedKeypoints[posePairs[k].first]; const std::vector<KeyPoint>& candB = detectedKeypoints[posePairs[k].second]; Find the unit vector joining the two points in consideration. This gives the direction of the line joining them. Python d_ij = np.subtract(candB[j][:2], candA[i][:2]) norm = np.linalg.norm(d_ij) if norm: d_ij = d_ij / norm C++ std::pair<float,float> distance(candB[j].point.x - candA[i].point.x,candB[j].point.y - candA[i].point.y); float norm = std::sqrt(distance.first*distance.first + distance.second*distance.second); if(!norm){ continue; } distance.first /= norm; distance.second /= norm; Create an array of 10 interpolated points on the line joining the two points. Python # Find p(u) interp_coord = list(zip(np.linspace(candA[i][0], candB[j][0], num=n_interp_samples), np.linspace(candA[i][1], candB[j][1], num=n_interp_samples))) # Find L(p(u)) paf_interp = [] for k in range(len(interp_coord)): paf_interp.append([pafA[int(round(interp_coord[k][1])), int(round(interp_coord[k][0]))], pafB[int(round(interp_coord[k][1])), int(round(interp_coord[k][0]))] ]) C++ //Find p(u) std::vector<cv::Point> interpCoords; populateInterpPoints(candA[i].point,candB[j].point,nInterpSamples,interpCoords); //Find L(p(u)) std::vector<std::pair<float,float>> pafInterp; for(int l = 0; l < interpCoords.size();++l){ pafInterp.push_back( std::pair<float,float>( pafA.at<float>(interpCoords[l].y,interpCoords[l].x), pafB.at<float>(interpCoords[l].y,interpCoords[l].x) )); } Take the dot product between the PAF on these points and the unit vector d_ij Python # Find E paf_scores = np.dot(paf_interp, d_ij) avg_paf_score = sum(paf_scores)/len(paf_scores) C++ std::vector<float> pafScores; float sumOfPafScores = 0; int numOverTh = 0; for(int l = 0; l< pafInterp.size();++l){ float score = pafInterp[l].first*distance.first + pafInterp[l].second*distance.second; sumOfPafScores += score; if(score > pafScoreTh){ ++numOverTh; } pafScores.push_back(score); } float avgPafScore = sumOfPafScores/((float)pafInterp.size()); Term the pair as valid if 70% of the points satisfy the criteria. Python # Check if the connection is valid # If the fraction of interpolated vectors aligned with PAF is higher then threshold -> Valid Pair if ( len(np.where(paf_scores > paf_score_th)[0]) / n_interp_samples ) > conf_th : if avg_paf_score > maxScore: max_j = j maxScore = avg_paf_score C++ if(((float)numOverTh)/((float)nInterpSamples) > confTh){ if(avgPafScore > maxScore){ maxJ = j; maxScore = avgPafScore; found = true; } } 6. Step 4 : Assemble Person-wise Keypoints

Now that we have joined all the keypoints into pairs, we can assemble the pairs that share the same part detection candidates into full-body poses of multiple people.

Let us see how it is done in code; The code snippets in this section belong to the getPersonwiseKeypoints() function in the provided code

We first create empty lists to store the keypoints for each person. Then we go over each pair, check if partA of the pair is already present in any of the lists. If it is present, then it means that the keypoint belongs to this list and partB of this pair should also belong to this person. Thus, add partB of this pair to the list where partA was found. Python for j in range(len(personwiseKeypoints)): if personwiseKeypoints[j][indexA] == partAs[i]: person_idx = j found = 1 break if found: personwiseKeypoints[person_idx][indexB] = partBs[i] C++ for(int j = 0; !found && j < personwiseKeypoints.size();++j){ if(indexA < personwiseKeypoints[j].size() && personwiseKeypoints[j][indexA] == localValidPairs[i].aId){ personIdx = j; found = true; } }/* j */ if(found){ personwiseKeypoints[personIdx].at(indexB) = localValidPairs[i].bId; } If partA is not present in any of the lists, then it means that the pair belongs to a new person not in the list and thus, a new list is created. Python # if find no partA in the subset, create a new subset elif not found and k < 17: row = -1 * np.ones(19) row[indexA] = partAs[i] row[indexB] = partBs[i] C++ else if(k < 17){ std::vector<int> lpkp(std::vector<int>(18,-1)); lpkp.at(indexA) = localValidPairs[i].aId; lpkp.at(indexB) = localValidPairs[i].bId; personwiseKeypoints.push_back(lpkp); } 7. Results

We go over each each person and plot the skeleton on the input image

Python for i in range(17): for n in range(len(personwiseKeypoints)): index = personwiseKeypoints[n][np.array(POSE_PAIRS[i])] if -1 in index: continue B = np.int32(keypoints_list[index.astype(int), 0]) A = np.int32(keypoints_list[index.astype(int), 1]) cv2.line(frameClone, (B[0], A[0]), (B[1], A[1]), colors[i], 2, cv2.LINE_AA) cv2.imshow("Detected Pose" , frameClone) cv2.waitKey(0) C++ for(int i = 0; i< nPoints-1;++i){ for(int n = 0; n < personwiseKeypoints.size();++n){ const std::pair<int,int>& posePair = posePairs[i]; int indexA = personwiseKeypoints[n][posePair.first]; int indexB = personwiseKeypoints[n][posePair.second]; if(indexA == -1 || indexB == -1){ continue; } const KeyPoint& kpA = keyPointsList[indexA]; const KeyPoint& kpB = keyPointsList[indexB]; cv::line(outputFrame,kpA.point,kpB.point,colors[i],2,cv::LINE_AA); } } cv::imshow("Detected Pose",outputFrame); cv::waitKey(0);

The figure below shows the skeletons for each of the detected persons!


Multi-Person Pose Estimation in OpenCV using OpenPose

Do check out the code provided with the post!

Subscribe & Download Code

If you liked this article and would like to download code (C++ and Python) and example images used in this post, please subscribe to our newsletter. You will also receive a free Computer Vision Resource Guide. In our newsletter, we share OpenCV tutorials and examples written in C++/Python, and Computer Vision and Machine Learning algorithms and news.

Subscribe Now

References

[Video used for demo] [OpenPose Paper] [OpenPose reimplementation in Keras]
          GIS Specialist - West, Inc. - Cheyenne, WY      Cache   Translate Page      
(Python, R, SQL, etc.). Working knowledge of SQL queries. Western EcoSystems Technology, Inc....
From West, Inc. - Sat, 18 Aug 2018 10:30:53 GMT - View all Cheyenne, WY jobs
          Scrape a media platform with python for me      Cache   Translate Page      
My country has a free state owned TV media library that mirrors whatever is shown on TV for a week or two. Unfortunately they delete the stuff after those ~2 weeks and I often miss out on shows that I would've loved to watch at a later time... (Budget: $30 - $250 USD, Jobs: Python, Web Scraping)
          Melbourne Deepcast x Berlin w/ DJ Python, Mosam Howieson + more      Cache   Translate Page      
Dance & Night Club | resistance@ohmberlin.com | Get Directions - 13.09.2018. Melbourne Deepcast x Berlin w/ DJ Python, Mosam Howieson + more. (On www.wgha.de we show you where to go out in the evening.)
          Natural Language Processing with Python from Zero to Hero      Cache   Translate Page      


Natural Language Processing with Python: from zero to hero
MP4 | Video: h264, 1280x720 | Audio: AAC, 44.1 KHz, 2 Ch
Genre: eLearning | Language: English | D...
Data Engineer
Modis - London - A Data Engineer is needed for a well-known Media / Tech company based in London. You will support the on-going cloud migration project... for building data pipelines all coded in Python. This role sits in a brand new team which they are looking to hire 4 Data Engineers du......
Azure Data Analytics Specialist / London / Contract
International Business Solutions Consult - London - Azure / Azure Analytics / Azure cloud / Data Factory / Azure Data Factory / Python / PowerShell / Visual Studio. Azure Data Analytics... Strong working experience in implementation of Azure Analytics cloud components. Strong working knowledge of Azure Data Factory V1 & V2, Pipel......
Data Engineer - Contract - A.I / Finance
Oliver Bernard - London - Job Description Data Engineer - Python, AWS, Spark, NumPy - London Company Data Engineer wanted! This company helps other companies... use data to drive decisions. They combine business experience, expertise in large-scale data analysis and visualisation, and advanced software...
Python - Ace Technologies - Raritan, NJ
Ability to interface with internal business partners to document and understand existing work processes. Understanding of machine learning, data munging and ETL...
From Ace Technologies - Mon, 03 Sep 2018 07:42:30 GMT
To even greater heights! Become a brilliantly shining high-spec engineer! by 株式会社グッドワークス
Become the kind of leader who pulls other engineers forward! We would eventually like you to lead an engineering force of nearly 100 people. If you are aiming for a management position, please apply. Details below. Position: We are recruiting engineers for our technical staffing business, hired as full-time employees of our company. Concretely, you will handle system development work and infrastructure-related work in general; since this is a staffing business, you will be stationed at client sites. What makes the work rewarding: You are not limited to a single project - being able to take on many different kinds of projects is a big attraction, and there are also many programs for realizing self-growth. Everyone wants to grow through their work; our work is not routine - you think for yourself, act, and take on new challenges, which is exactly why we assign work with growth potential as our top priority. Who we are looking for: people with engineering experience in infrastructure or system development; people who want to take on new things; people who can communicate within a team; people looking for a company where they can draw up a long-term career plan. Development environment: C, C++, C#, Java, COBOL, HTML, CSS, JS, Python, VBA, Windows, Linux, etc. Future career: In the future we would like you to take on roles such as team leader or section manager; there will also be opportunities to move into other businesses, for example teaching in our programming school business, making use of your hands-on development experience. Please feel free to contact us via the "I'd like to hear more" button.
We value growth potential over being an immediate asset! Actively hiring university students who want to become programmers! by 株式会社CoinOtaku
At CoinOtaku, we collect and analyze quantitative data on cryptocurrencies and provide a service that rates each currency and forecasts prices. Market prediction with machine learning and evaluation of a currency's future potential is a field that securities firms and banks such as Goldman Sachs, and tech giants such as Google, are developing intensively; we are doing that development in the cryptocurrency domain. As for current results, we analyze several hundred cryptocurrencies from angles such as development strength, performance, market reputation, community activity, the current state of mining, and future real-world demand, express the evaluation as a standardized score, and offer it as paid content. We also fetch price data from more than 10 exchanges and apply machine learning to it, building an algorithm that achieves returns of several tens of percent per month in backtests. In a market where it is extremely hard to give cryptocurrencies a fair evaluation, if you want to use big data to make sense of the cryptocurrency market and deliver that to many users, please apply!
Technologies and tools used on the job - core areas: machine learning, web; languages: Python 3, JavaScript; libraries: Flask, Vue, Nuxt, Express; infrastructure: EC2, S3, RDS, Redis, MySQL.
Required: development experience in Python or JavaScript. Preferred: knowledge of infrastructure such as AWS; knowledge of machine learning and statistics.
Who we are looking for: people with an interest in cryptocurrencies; people who take on bold challenges and learn from many failures; people who think and act on their own initiative for the team and take the lead in driving success; people who work with ownership and spare no effort to do their best.
Let's decode the cryptocurrency market together and support its growth by giving the market convincing answers. We look forward to your application!
To anyone even slightly interested in working at CoinOtaku: CoinOtaku is seriously aiming to be the world's No. 1 cryptocurrency information service. It is made up of students who share this goal and are seriously trying to achieve it, and the results are steadily coming in. Unlike other student internships, the amount of discretion, the sense of achievement, and the sense of fulfillment are on a different level. If any part of you wants to help realize this goal and to learn a wide range of skills and grow in the process, please press the "I want to hear more" button. This has gotten long, but thank you for reading this far. All of our members sincerely look forward to your application.
Still burning yourself out on part-time jobs? Raise your market value with an internship at a student-run company! by 株式会社CoinOtaku
confuse-a-cat labor-intensive monty python version
but wait....
by possum

Python Application and Platform Developer - Plotly - Montréal, QC
Engage our Open Source communities and help take stewardship of our OSS projects. Plotly is hiring Pythonistas!...
From Plotly - Thu, 06 Sep 2018 15:03:02 GMT
Python Spark connection to MongoDB
DevOps Engineer - AWS, Python, Node.js, JS - Relo Offered
IL-Lake Forest, If you are a DevOps Engineer with experience, please read on! Based in beautiful Lake Forest, Illinois we are a well-established, enterprise type of company that specializes in health and hospitality. Due to growth and demand for our services, we are in need of hiring for a DevOps Engineer that possesses strong experience with AWS, CI/CD Pipelines, and python/node or JS/node. If you are interested
Python Engineer
NY-New York City, We are a growing Analytics Company looking to hire an experienced Python Engineer. What You Will Be Doing - Meeting customer requests and software SLAs using your experience in python. - Architect and execute end-to-end data pipelines which may include: accessing remote sites, API integrations, query design/optimization - Navigate and permission Windows and Linux environments in support of our pro
Data Engineer - Python
NY-New York City, We are a growing Analytics Company looking to hire an experienced Data Engineer with Python experience. What You Will Be Doing - Meeting customer requests and software SLAs using your experience in python. - Architect and execute end-to-end data pipelines which may include: accessing remote sites, API integrations, query design/optimization - Navigate and permission Windows and Linux environments
Telecommute AWS Application Expert Developer
An internet company is in need of a Telecommute AWS Application Expert Developer. Individual must be able to fulfill the following responsibilities: Build highly available and scalable cloud applications on AWS Develop and architect systems at scale Make business decisions and recommendations on the best technology Required Skills: Demonstrated ability to build high performance multi-platform applications and robust APIs Experience working in teams with a DevOps culture Knowledge of AWS, GCP or similar cloud platforms Knowledge of gRPC, Docker, Kubernetes, Linkerd, NodeJs, and/or Cassandra, etc Knowledge of REST APIs in Ruby, Java, Scala, Python, PHP, and/or C#, etc Knowledge of PostgreSQL, MySQL, SQL Server, or another RDBMS
(USA-PA-Philadelphia) HIA Data & Analytics - Release Delivery Engineer, Senior Associate
A career in our Business Intelligence practice, within Data and Analytics Technology services, will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. You’ll make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. Creating business intelligence from data requires an understanding of the business, the data, and the technology used to store and analyse that data. Using our Rapid Business Intelligence Solutions, data visualisation and integrated reporting dashboards, we can deliver agile, highly interactive reporting and analytics that help our clients to more effectively run their business and understand what business questions can be answered and how to unlock the answers. **Responsibilities** As a Senior Associate, you’ll work as part of a team of problem solvers with extensive consulting and industry experience, helping our clients solve their complex business issues from strategy to execution. Specific responsibilities include but are not limited to: + Proactively assist in the management of several clients, while reporting to Managers and above + Train and lead staff + Establish effective working relationships directly with clients + Contribute to the development of your own and team’s technical acumen + Keep up to date with local and national business and economic issues + Be actively involved in business development activities to help identify and research opportunities on new/existing clients + Continue to develop internal relationships and your PwC brand **Preferred skills** + Code branching best practices + Experience with git source control + Understand Release Performance + AWS CloudWatch + AWS CLI & Console experience + Deployment Automation & Orchestration + Jenkins Programming Skills + Understand the implemented security controls and governance processes. Validate compliance and evolve the controls/processes as the product evolves. + Understand AWS monitoring, metrics, and logging and update as the product evolves + Ensure that the AWS environment continues to be highly available, scalable, and self-healing + Understand and maintain CI/CD pipelines with AWS CodePipeline and AWS CodeBuild + Understand and manage AWS CloudFormation templates - requires experience with CloudFormation or similar infrastructure as code platform + Artifacts Management experience + Strong Scripting experience - Bash, Python + Source Code Management + Strong knowledge of software build cycles **Recommended Certs/Training** + AWS Certified SysOps Administrator + AWS Certified DevOps Engineer **Minimum years experience required** 3 to 4 years Client Facing Consulting of technical implementation and delivery management experience. **Additional experience preferred** Healthcare industry experience - PLS, Payer & Provider experience. Hand-on experience with at least 2-3 leading BI & Analytics tools/products; 1. BI: Cognos, OBIEE, SAP BO, SSRS, Microsoft BI, MicroStrategy, etc... 2. Data Visualization: Qlik, Tableau, Spotfire, CoreBI, etc... 3. Analytics: SAS, R, Alteryz, IBM Watson, Anzo, etc... 
All qualified applicants will receive consideration for employment at PwC without regard to race; creed; color; religion; national origin; sex; age; disability; sexual orientation; gender identity or expression; genetic predisposition or carrier status; veteran, marital, or citizenship status; or any other status protected by law. PwC is proud to be an affirmative action and equal opportunity employer. _For positions based in San Francisco, consideration of qualified candidates with arrest and conviction records will be in a manner consistent with the San Francisco Fair Chance Ordinance._ All qualified applicants will receive consideration for employment at PwC without regard to race; creed; color; religion; national origin; sex; age; disability; sexual orientation; gender identity or expression; genetic predisposition or carrier status; veteran, marital, or citizenship status; or any other status protected by law.
(USA-NY-New York) Seasonal Digital Risk Solutions - Senior Associate
A career in our Flexibility Talent Network practice will focus on directly supporting our client engagement teams by attracting qualified candidates for short term or defined period opportunities. This unique career option provides an alternative to year round employment for people who are looking to pursue meaningful experiences or responsibilities outside their time with PwC. **Responsibilities** As a Senior Associate, you’ll work as part of a team of problem solvers with extensive consulting and industry experience, helping our clients solve their complex business issues from strategy to execution. Specific responsibilities include but are not limited to: + Proactively assist in the management of several clients, while reporting to Managers and above + Train and lead staff + Establish effective working relationships directly with clients + Contribute to the development of your own and team’s technical acumen + Keep up to date with local and national business and economic issues + Be actively involved in business development activities to help identify and research opportunities on new/existing clients + Continue to develop internal relationships and your PwC brand A career in our Analytics Data Assurance practice, within Risk Assurance Compliance and Analytics services, will provide you with the opportunity to assist clients in developing analytics and technology solutions that help them detect, monitor, and predict risk. Using advanced technology, we re able to focus on establishing the right controls, processes and structures for our clients. **Job Requirements and Preferences** : **Basic Qualifications** : **Minimum Degree Required** : Bachelor Degree **Required Fields of Study** : Accounting, Management Information Systems, Management Information Systems & Accounting, Mathematical Statistics, Engineering, Statistics, Mathematics **Minimum Years of Experience** : 3 year(s) of consulting, data analysis, compliance, internal audit or risk experience. **Preferred Qualifications** : **Degree Preferred** : Master of Business Administration **Preferred Knowledge/Skills** : Demonstrates thorough knowledge and/or a proven track record of success with operating in a professional services firm or large enterprise as a consultant, auditor or business process specialist. Demonstrates thorough knowledge and understanding of performing on project teams and providing deliverables involving multiphase data analysis related to the evaluation of compliance, finance, and risk issues. Demonstrates thorough knowledge and understanding of evaluating business process, compliance and risk issues. Demonstrates thorough knowledge and understanding of database concepts, including building relationships between tables, grouping data, and producing cohesive analyses. Demonstrates thorough knowledge and/or a proven record of success leveraging data manipulation and analysis technologies inclusive of Microsoft SQL Server, SQL, Oracle, or DB2. Demonstrates thorough knowledge and/or a proven record of IT architecture and solution development through the use of R, Python, Java, C# or other programming languages. Demonstrates thorough knowledge and/or a proven record of success leveraging data visualization tools such as Spotfire, Qlickview Microsoft BI and Tableau. 
Demonstrates knowledge with identifying new service opportunities, including the following: - Identifying and addressing client needs, including developing and sustaining extensive client relationships; and, - Assisting to generate a vision, provide input on direction, coach staff, and encourage improvement and innovation. Demonstrates thorough abilities and/or a proven record of success working seamlessly in a virtual environment to complete projects with team members based in various locations, domestically, and globally. Demonstrates thorough abilities supervising staff, which includes creating a positive environment by monitoring workloads of the team, respecting the work-life quality of team members, providing feedback in a timely manner, performing a critical review of other’s work, informally coaching staff, and keeping leadership informed of progress and issues. Demonstrates thorough abilities to identify and address client needs, including developing and sustaining meaningful client relationships and understand the client's business. Demonstrates thorough project management skills in relation to data management projects, including developing project plans, budgets, and deliverables schedules. Demonstrates creative thinking, individual initiative, and flexibility in prioritizing and completing tasks. Demonstrates thorough abilities and/or a proven record of success with researching and analyzing pertinent client, industry, and technical matters. Demonstrates a desire to obtain deep industry sector(s) expertise over time. Demonstrates thorough abilities to approach clients and team members in an organized and knowledgeable manner and to deliver clear requests for information. All qualified applicants will receive consideration for employment at PwC without regard to race; creed; color; religion; national origin; sex; age; disability; sexual orientation; gender identity or expression; genetic predisposition or carrier status; veteran, marital, or citizenship status; or any other status protected by law. PwC is proud to be an affirmative action and equal opportunity employer. _For positions based in San Francisco, consideration of qualified candidates with arrest and conviction records will be in a manner consistent with the San Francisco Fair Chance Ordinance._ All qualified applicants will receive consideration for employment at PwC without regard to race; creed; color; religion; national origin; sex; age; disability; sexual orientation; gender identity or expression; genetic predisposition or carrier status; veteran, marital, or citizenship status; or any other status protected by law.
(USA-PA-Philadelphia) HIA Data & Analytics - Data Engineer, Senior Associate
A career within Data and Analytics Technology services, will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. **Responsibilities** As a Senior Associate, you’ll work as part of a team of problem solvers with extensive consulting and industry experience, helping our clients solve their complex business issues from strategy to execution. Specific responsibilities include but are not limited to: + Proactively assist in the management of several clients, while reporting to Managers and above + Train and lead staff + Establish effective working relationships directly with clients + Contribute to the development of your own and team’s technical acumen + Keep up to date with local and national business and economic issues + Be actively involved in business development activities to help identify and research opportunities on new/existing clients + Continue to develop internal relationships and your PwC brand **Preferred skills** + Strong Java Programming Skills + Strong SQL Query skills + Experience using Java with Spark + Hands-on experience with Hadoop administration and troubleshooting, including cluster configuration and scaling + Hands-on experience with Hive + AWS Experience + Hands-on AWS Console and CLI + Experience with S3, Kinesis Stream + AWS EMS + AWS Lambda + Python **Helpful to have** + AWS ElasticSearch + AWS DynamoDB **Recommended Certs/Training** + AWS Certified Big Data – Specialty **Minimum years experience required** + 3 to 4 years Client Facing Consulting of technical implementation and delivery management experience. **Additional application instructions** Healthcare industry experience - PLS, Payer & Provider experience. Hand-on experience with at least 2-3 leading enterprise data tools/products; 1. Data Integration: Informatica Power Center, IBM Data Stage, Oracle Data Integrator. 2. MDM: Reltio, Informatica MDM, Veeva Network, Reltio. 3. Data Quality: Informatica Data Quality, Trillium. 4. Big Data: Hortonworks, Cloudera, Apache Spark, Kafka, etc... 5. Data Visualization: Denodo 6. Metadata Management: Informatica Metadata Manager, Informatica Live Data Map. 7. Data Stewardship/Governance: Collibra, Elation. All qualified applicants will receive consideration for employment at PwC without regard to race; creed; color; religion; national origin; sex; age; disability; sexual orientation; gender identity or expression; genetic predisposition or carrier status; veteran, marital, or citizenship status; or any other status protected by law. PwC is proud to be an affirmative action and equal opportunity employer. 
_For positions based in San Francisco, consideration of qualified candidates with arrest and conviction records will be in a manner consistent with the San Francisco Fair Chance Ordinance._ All qualified applicants will receive consideration for employment at PwC without regard to race; creed; color; religion; national origin; sex; age; disability; sexual orientation; gender identity or expression; genetic predisposition or carrier status; veteran, marital, or citizenship status; or any other status protected by law.
(USA-NY-New York) Data Science Intern
*The Team:* The Data science team is a newly formed applied research team within S&P Global Ratings that will be responsible for building and executing a bold vision around using Machine Learning, Natural Language Processing, Data Science, knowledge engineering, and human computer interfaces for augmenting various business processes. *The Impact:* This role will have a significant impact on the success of our data science projects ranging from choosing which projects should be undertaken, to delivering highest quality solution, ultimately enabling our business processes and products with AI and Data Science solutions. *What s in it for you:* This is a high visibility team with an opportunity to make a very meaningful impact on the future direction of the company. You will work with other highly accomplished team members to * Implement data science algorithms * Solve business problems using data science methods * Collaborate effectively with technical and non-technical partners * Create state of the art Augmented Intelligence, Data Science and Machine Learning solutions. * Assist in tracking quantitative and qualitative metrics to measure process and - or content. Provides fact-based interpretation and analysis of findings. *Responsibilities:* As an intern you will be responsible for building AI and Data Science models. You will need to rapidly prototype various algorithmic implementations and test their efficacy using appropriate experimental design and hypothesis validation. *Basic Qualifications:* * Currently enrolled in PhD or MS in Computer Science, Computational Linguistics, Artificial Intelligence, Statistics, or related field. * Strong ability to code in Python or Java *Preferred Qualifications:* * Experience with Financial data sets, or S&P s credit ratings process is highly preferred. * Knowledge and working experience in one or more of the following areas:* Natural Language Processing, Machine Learning, Question Answering, Text Mining, Information Retrieval, Distributional Semantics, Data Science, Knowledge Engineering *To all recruitment agencies:* S&P Global does not accept unsolicited agency resumes. Please do not forward such resumes to any S&P Global employee, office location or website. S&P Global will not be responsible for any fees related to such resumes. S&P Global is an equal opportunity employer committed to making all employment decisions without regard to race - ethnicity, gender, pregnancy, gender identity or expression, color, creed, religion, national origin, age, disability, marital status (including domestic partnerships and civil unions), sexual orientation, military veteran status, unemployment status, or any other basis prohibited by federal, state or local law. Only electronic job submissions will be considered for employment. *If you need an accommodation during the application process due to a disability, please send an email to:* EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law.
(DEU-Munich) Data Engineer – Innovation Scaling for Web & Mobile Applications
*Role Title:* Data Engineer Innovation Scaling for Web & Mobile Applications - 10A *The Role:* You live to break down and solve complex problems by creating practical, maintainable, and scalable solutions. You're a great person that willingly collaborates, listens and cares about your peers. If this is you then you have the best premises to join our team. In your role as the Data Engineer you will be responsible for the end to end Data Migration development, ownership and management. Our department is mainly responsible for the transition and scaling of the prototypes, generated by the innovation department, towards a fully integrated solution which our customers can rely on. Besides that we are also responsible for the enhancements & maintenance of existing products. *Your responsibilities will include but are not limited to:* * Build the infrastructure required for optimal extraction, transformation and loading of data from a wide variety of data sources incl. using SQL, Hadoop and AWS data sources. Document & consolidate data sources if required. * Collaborate with local development & data teams and the central data management group * Identify, design and implement internal process improvements:* automating manual processes, optimizing data delivery, re-design infrastructure for greater scalability etc. * Enable cutting-edge customer solutions by retrieving and aggregating data from multiple sources and compiling it into digestible and actionable forms. * Act as a trusted technical advisor for the teams and stakeholders. * Work with managers, software developers, and scientists to design and develop data infrastructure and cutting-edge market solutions. * Create data tools for analytics and data science team members tat assist them in building and optimizing our products into innovative business leaders in their segment. * Derive Unsupervised and Supervised Insights from Data with below specializations * Provide Machine Learning competences o Working on various kind of data like Continuous Numerical, Discrete, Textual, Image, Speech, Baskets etc. o Experience in Data Visualization, Predictive Analytics, Machine Learning, Deep Learning, Optimization etc. o Derive and Drive business Metrics and Measurement Systems to enable for AI readiness. o Handle large datasets using big data technologies. *The Impact:* You have the opportunity to shape one of the oldest existing industries in one of the largest enterprises in the market. Through active participation in shaping and improving our ways to achieve technical excellence you will drive and improve our business. *The Career Opportunity:* You will be working within flat hierarchies in a young and dynamic team with flexible working hours. You will benefit from a bandwidth of career enhancing opportunities. You have very good opportunities to shape your own working environment in combination with a very good compensation as well as benefits and will experience the advantage of both a big enterprise and a small start-up at the same time. Since the team is fairly small you will benefit from high trust and responsibility given to you. Also you will be a key person to grow our team. You should also be motivated to introduce new innovative processes and tools into an existing global enterprise structure. *The Team - The Business:* We are a small, highly motivated team in a newly set up division to scale innovation. 
We use agile methodologies to drive performance and we share and transfer knowledge as well as embracing methods such as pairing or lightning talks to do so. We are always trying to stay ahead of things and try to be state-of-the-art and cutting-edge. *Knowledge & Skills:* * Proven experience in a data engineering, business analytics, business intelligence or comparable data engineering role, including data warehousing and business intelligence tools, techniques and technology * B.S. degree in math, statistics, computer science or equivalent technical field * Experience transforming raw data into information. Implemented data quality rules to ensure accurate, complete, timely data that is consistent across databases. * Demonstrated ability to think strategically about business, product, and technical challenges * Experience in data migrations and transformational projects * Fluent English written and verbal communication skills * Effective problem-solving and analytical capabilities * Ability to handle a high pressure environment * Programming & Tool skills, Python, Spark, Tableau, XLMiner, Linear Regression, Logistic Regression, Unsupervised Machine Learning, Supervised Machine Learning, Forecasting, Marketing, Pricing, SCM, SMAC Analytics *_Beneficial experience:* _ * Experience in NoSQL databases (e.g. Dynamo DB, Mongo DB) * Experience in RDBMS databases (e.g. Oracle DB) *_About Platts and S&P Global_* *Platts is a premier source of benchmark price assessments and commodities intelligence. At Platts, the content you generate and the relationships you build are essential to the energy, petrochemicals, metals and agricultural markets. Learn more at https:* - - www.platts.com - *S&P Global*includes Ratings, Market Intelligence, S&P Dow Jones Indices and Platts. Together, we re the foremost providers of essential intelligence for the capital and commodities markets. - S&P Global is an equal opportunity employer committed to making all employment decisions without regard to race - ethnicity, gender, pregnancy, gender identity or expression, colour, creed, religion, national origin, age, disability, marital status (including domestic partnerships and civil unions), sexual orientation, military veteran status, unemployment status, or other legally protected categories, subject to applicable law. - *To all recruitment agencies:* S&P Global does not accept unsolicited agency resumes. Please do not forward such resumes to any S&P Global employee, office location or website. S&P Global will not be responsible for any fees related such resumes.
(USA-NY-New York) Data Scientist
*The Team:* The Data science team is a newly formed applied research team within S&P Global Ratings that will be responsible for building and executing a bold vision around using Machine Learning, Natural Language Processing, Data Science, knowledge engineering, and human computer interfaces for augmenting various business processes. *The Impact:* This role will have a significant impact on the success of our data science projects ranging from choosing which projects should be undertaken, to delivering highest quality solution, ultimately enabling our business processes and products with AI and Data Science solutions. *What s in it for you:* This is a high visibility team with an opportunity to make a very meaningful impact on the future direction of the company. You will work with senior leaders in the organization to help define, build, and transform our business. You will work closely with other senior scientists to create state of the art Augmented Intelligence, Data Science and Machine Learning solutions. *Responsibilities:* As a Data Scientist you will be responsible for building AI and Data Science models. You will need to rapidly prototype various algorithmic implementations and test their efficacy using appropriate experimental design and hypothesis validation. *Basic Qualifications:* BS in Computer Science, Computational Linguistics, Artificial Intelligence, Statistics, or related field with 5 years of relevant industry experience. *Preferred Qualifications:* * MS in Computer Science, Statistics, Computational Linguistics, Artificial Intelligence or related field with 3 years of relevant industry experience. * Experience with Financial data sets, or S&P s credit ratings process is highly preferred. * Knowledge and working experience in one or more of the following areas:* Natural Language Processing, Machine Learning, Question Answering, Text Mining, Information Retrieval, Distributional Semantics, Data Science, Knowledge Engineering * Proficient programming skills in a high-level language (e.g. Java, Scala, Python, C - C , Perl, Matlab, R) * Experience with statistical data analysis, experimental design, and hypotheses validation * Project-based experience with some of the following tools:* * Applied machine learning (e.g. libSVM, Shogun, Scikit-learn or similar) * Natural Language Processing (e.g., ClearTK, ScalaNLP - Breeze, ClearNLP, OpenNLP, NLTK, or similar) * Statistical data analysis and experimental design (e.g., using R, Matlab, iPython, etc.) * Information retrieval and search engines, e.g. Solr - Lucene * Distributed computing platforms, such as Hadoop (Hive, HBase, Pig), Spark, GraphLab * Databases (traditional and noSQL) *At S&P Global, we don t give you intelligencewe give you essential intelligence. The essential intelligence you need to make decisions with conviction. We re the world s foremost provider of ratings, benchmarks and analytics in the global capital and commodity markets. Our divisions include:* * S&P Global Ratings, which provides credit ratings, research and insights essential to driving growth and transparency. * S&P Global Market Intelligence, which provides insights into companies, markets and data so that business and financial decisions can be made with conviction. * S&P Dow Jones Indices, the world s largest resource for iconic and innovative indices, which helps investors pinpoint global opportunities. 
* S&P Global Platts, which equips customers to identify and seize opportunities in energy and commodities, stimulating business growth and market transparency. *To all recruitment agencies:* S&P Global does not accept unsolicited agency resumes. Please do not forward such resumes to any S&P Global employee, office location or website. S&P Global will not be responsible for any fees related to such resumes. S&P Global is an equal opportunity employer committed to making all employment decisions without regard to race - ethnicity, gender, pregnancy, gender identity or expression, color, creed, religion, national origin, age, disability, marital status (including domestic partnerships and civil unions), sexual orientation, military veteran status, unemployment status, or any other basis prohibited by federal, state or local law. Only electronic job submissions will be considered for employment. *If you need an accommodation during the application process due to a disability, please send an email to:* EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law.
(USA-CO-Centennial) Senior Cloud Engineer
*TheRole:* Looking for a Senior Cloud Engineer to help build, maintain, and troubleshoot our rapidly expanding SPGlobal cloud computing infrastructure. *The Location:* Primary:* Colorado, US Other Location:* NJ & NY, US *The Team:* Solutions Engineering Team is responsible delivering infrastructure architecture, implementation & deployments on S&P Global MI Cloud & Data Centers in collaboration with S&P Global Digital infrastructure & other Teams from a wide variety of backgrounds. The key area of focus:* Infrastructure as Code, Application Architecture, Automation, Project Delivery, Environment Troubleshooting, and Low Level Design Documentation. *The Impact:* You will utilize your technical knowledge and analytical skills in architecting and optimizing cloud infrastructure, standardizing technology stack for cloud & datacenters, implementing cloud service catalog and service governance solutions, and driving implementation of our cloud first strategy and cloud adoption in partnership with development & business stakeholders. *What s in it for you:* You will be part of a talented team of engineers in our Solutions Engineering team that demonstrate superb technical competency, delivering mission critical infrastructure and ensuring the highest levels of availability, performance and security. The Cloud Engineer will be responsible for supporting a large scale deployment and all underlying systems related to supporting cloud computing workloads and IT project deployment. *Responsibilities:* * Develop and leverage AWS tools and services to manage and automate key operations capabilities. This includes AWS Systems Manager, Patch Manager, Cloud Formation and custom scripting to extend the AWS services. * Proactively ensure the highest levels of systems and infrastructure availability * Monitor and test application performance for potential bottlenecks, identify possible solutions, and work with developers to implement those fixes. * Maintain security, backup, and redundancy strategies * Write and maintain custom scripts to increase system efficiency and lower the human intervention time on any tasks. * Participate in the design of information and operational support systems * Provide 2nd and 3rd level support for AWS infrastructure. * Liaise with vendors and other IT personnel for problem resolution *What We re Looking For:* * Looking for 3 to 4 years of experience implementing AWS Cloud infrastructure solutions. * Experience with AWS hosting technologies:* VPC, EC2, ELB, RDS, Lambda, SES, SNS, Containers, API Gateway,etc. * Good conceptual understanding & knowledge on virtualization & container based technologies such as AWS, Docker, VMware, and Virtual Box. * 3 to 4 years of solid scripting skills on any 2 of these following power shell, clould formation (must), shell scripts, Perl, Python, Javascripts. * Experience with monitoring solutions, such as:* AWS CloudWatch, Nagios, ELK, etc. * Experience working with large scale IT projects related to design, deployment and configuration. * Strong critical thinking and problem solving skills. * Ability to work individually without much direction while also working as part of a team towards a common goal. * Excellent written and oral communication skills. * Certifications:* * Must have at least one of the following active certs upon application:* AWS. This is a requirement. *Basic Qualifications:* Bachelor's - Master s Degree in Computer Science, Information Systems or equivalent. 
*Preferred Qualifications:* * AWS professionals with development or architecture background. * Proficient with software development lifecycle (SDLC) methodologies like Agile, Test- driven development. * Good experience in delivering infrastructure solutions using AWS services.
(USA-NY-New York) Engineering Director, Data Science
The Team: The Data Science team is a newly formed applied research team within S&P Global Ratings that will be responsible for building and executing a bold vision around using Machine Learning, Natural Language Processing, Data Science, knowledge engineering, and human-computer interfaces for augmenting various business processes. The Impact: This role will have a significant impact on the success of our data science projects, ranging from choosing which projects should be undertaken to delivering the highest quality solution, ultimately enabling our business processes and products with AI and Data Science solutions. What's in it for you: This is a high-visibility leadership role with an opportunity to make meaningful impact on the future direction of the company. You will define new op