
An Efficient Method for Cloning Gastrointestinal Stem Cells from Patients via Endoscopic Biopsies.

Gastroenterology. 2018 Oct 05;:

Authors: Duleba M, Qi Y, Mahalingam R, Flynn K, Rinaldi F, Liew AA, Neupane R, Vincent M, Crum CP, Ho KY, Hou JK, Hyams JS, Sylvester FA, McKeon F, Xian W

PMID: 30296437 [PubMed - as supplied by publisher]


Key disease-resistant gene spotted in wheat

DF Report

A team of researchers has identified and analysed Yr15, a broad-spectrum disease-resistance gene derived from wild emmer wheat, the ancestor of durum (pasta) wheat.

Identification of the Yr15 gene paves the way for a durable solution for controlling yellow rust, a major threat to food security for millions of people, believes the international group of researchers led by Natural Resources Institute Finland (Luke), the Institute of Biotechnology at the University of Helsinki, and the University of Haifa.

Yellow rust is a devastating fungal disease threatening much of global wheat production. The problem has been increasing with climate change, said a press release issued by Luke.

Wheat is the most cultivated food crop globally, but more than five million tons of the wheat harvest, valued at around one billion USD, are estimated to be lost annually to yellow rust, affecting food security and affordability for millions of people.

The international collaborative study involving about 30 researchers at 14 institutions in eight countries unlocks interesting opportunities for breeding more disease-resistant wheat varieties.

“Aaron Aaronsohn discovered wild wheat in 1906, and believed it would hold the key to breeding disease- and stress-resistant crops. With the work on Yr15, Aaronsohn’s vision is bearing fruit,” said Prof Alan Schulman of Luke.

“Crop wild relatives, like wild emmer wheat, are a great reservoir of useful genes and need conservation. Combined with a genome sequence – which became available this year – rapid advances in breeding are now possible,” said Schulman.

Although new disease-resistance genes are frequently discovered from various sources, fungi rapidly evolve to overcome them, rendering the majority of the world's wheat production susceptible to epidemics. However, Yr15 has been found to be remarkably stable over several decades.

“We now understand why Yr15 is so robust – its structure is highly unusual for a disease-resistance gene. The project showed, though, that related genes are present in many plants, opening the door to widespread use,” Schulman added.

The results of the study, ‘Cloning of the wheat Yr15 resistance gene sheds light on the plant tandem kinase-pseudokinase family’, have been published in the September 2018 issue of Nature Communications.

 

Generic Docker Container Image for running and live reloading a Node application based on a ...

Originally published at technology.amis.nl

My desire: find a way to run a Node application from a Git(Hub) repository using a generic Docker container and be able to refresh the running container on the fly whenever the sources in the repo are updated. The process of producing a container for each application, and for each change to the application, is too cumbersome and time-consuming for certain situations — including rapid development/test cycles and live demonstrations. I am looking for a convenient way to run a Node application anywhere I can run a Docker container — without having to build and push a container image — and to continuously update the running application in mere seconds rather than minutes. This article describes what I created to address that requirement.

Key ingredient in the story: nodemon — a tool that monitors a file system for any changes in a node.js application and automatically restarts the server when there are such changes. What I had to put together:

a generic Docker container based on the official Node image — with npm and a git client inside

  • adding nodemon (to monitor the application sources)
  • adding a background Node application that can refresh from the Git repository — upon an explicit request, based on a job schedule and triggered by a Git webhook
  • defining an environment variable GITHUB_URL for the url of the source Git repository for the Node application
  • adding a startup script that runs when the container is started for the first time (clone the Git repo specified through GITHUB_URL and run the application with nodemon) or is restarted (just run the application with nodemon)

I have been struggling a little bit with the Docker syntax and operations (CMD vs RUN vs ENTRYPOINT) and the Linux bash shell scripts — and I am sure my result can be improved upon.

The Dockerfile that builds the Docker container with all generic elements looks like this:

FROM node:8

# copy the Node Reload server - exposed at port 4500
COPY package.json /tmp
COPY server.js /tmp
RUN cd /tmp && npm install
EXPOSE 4500
RUN npm install -g nodemon
COPY startUpScript.sh /tmp
COPY gitRefresh.sh /tmp
# chmod must happen at build time (RUN), not at container start (CMD);
# with an ENTRYPOINT defined, CMD entries would never execute anyway
RUN chmod +x /tmp/startUpScript.sh
RUN chmod +x /tmp/gitRefresh.sh
ENTRYPOINT ["sh", "/tmp/startUpScript.sh"]

Feel free to pick any other node base image — from https://hub.docker.com/_/node/. For example: node:10.

The startUpScript that is executed whenever the container is started up — taking care of the initial cloning of the Node application from the Git(Hub) URL to directory /tmp/app and of running that application using nodemon — is shown below. Note the trick (inspired by StackOverflow) to run a block of commands only when the container is started for the very first time.

#!/bin/sh
CONTAINER_ALREADY_STARTED="CONTAINER_ALREADY_STARTED_PLACEHOLDER"
if [ ! -e $CONTAINER_ALREADY_STARTED ]; then
    touch $CONTAINER_ALREADY_STARTED
    echo "-- First container startup --"
    # YOUR_JUST_ONCE_LOGIC_HERE
    cd /tmp
    # prepare the actual Node app from GitHub
    mkdir app
    git clone $GITHUB_URL app
    cd app
    # install dependencies for the Node app
    npm install
    # start both the reload app and (using nodemon) the actual Node app
    cd ..
    (echo "starting reload app") &
    (echo "start reload"; npm start; echo "reload app finished") &
    cd app; echo "starting nodemon for app cloned from $GITHUB_URL"; nodemon
else
    echo "-- Not first container startup --"
    cd /tmp
    (echo "starting reload app and nodemon") &
    (echo "start reload"; npm start; echo "reload app finished") &
    cd app; echo "starting nodemon for app cloned from $GITHUB_URL"; nodemon
fi

The startup script runs the live reloader application in the background — using (echo "start reload"; npm start) &. That final ampersand (&) takes care of running the command in the background. This npm start command runs the server.js file in /tmp. This server listens at port 4500 for requests. When a request is received at /reload, the application executes the gitRefresh.sh shell script, which performs a git pull in the /tmp/app directory where the git clone of the repository was targeted.

 

const RELOAD_PATH = '/reload'
const GITHUB_WEBHOOK_PATH = '/github/push'

var http = require('http');
var server = http.createServer(function (request, response) {
    console.log(`method ${request.method} and url ${request.url}`)
    if (request.method === 'GET' && request.url === RELOAD_PATH) {
        console.log(`reload request starting at ${new Date().toISOString()}...`);
        refreshAppFromGit();
        response.write(`RELOADED!!${new Date().toISOString()}`);
        response.end();
        console.log('reload request handled...');
    } else if (request.method === 'POST' && request.url === GITHUB_WEBHOOK_PATH) {
        let body = [];
        request.on('data', (chunk) => { body.push(chunk); })
            .on('end', () => {
                body = Buffer.concat(body).toString();
                // at this point, `body` has the entire request body stored in it as a string
                console.log(`GitHub WebHook event handling starting ${new Date().toISOString()}...`);
                // ... (see code in GitHub Repo https://github.com/lucasjellema/docker-node-run-live-reload/blob/master/server.js)
                console.log("This commit involves changes to the Node application, so let's perform a git pull ")
                refreshAppFromGit();
                response.write('handled');
                response.end();
                console.log(`GitHub WebHook event handling complete at ${new Date().toISOString()}`);
            });
    } else {
        // respond
        response.write('Reload is live at path ' + RELOAD_PATH);
        response.end();
    }
});
server.listen(4500);
console.log('Server running and listening at Port 4500');

var shell = require('shelljs');
var pwd = shell.pwd()
console.info(`current dir ${pwd}`)

function refreshAppFromGit() {
    if (shell.exec('./gitRefresh.sh').code !== 0) {
        shell.echo('Error: Git Pull failed');
        shell.exit(1);
    }
}

Using the node-run-live-reload image
Now that you know a little about the inner workings of the image, let me show you how to use it (also see instructions here: https://github.com/lucasjellema/docker-node-run-live-reload).

To build the image yourself, clone the GitHub repo and run

docker build -t "node-run-live-reload:0.1" .

using of course your own image tag if you like. I have pushed the image to Docker Hub as lucasjellema/node-run-live-reload:0.1. You can use this image like this:

docker run --name express -p 3011:3000 -p 4505:4500 -e GITHUB_URL=https://github.com/shapeshed/express_example -d lucasjellema/node-run-live-reload:0.1

In the terminal window, we can get the logging from within the container using

docker logs express --follow

After the application has been cloned from GitHub, npm has installed the dependencies and nodemon has started the application, we can access it at <host>:3011 (because of the port mapping in the docker run command):


When the application sources are updated in the GitHub repository, we can use a GET request (from CURL or the browser) to <host>:4505 to refresh the container with the latest application definition:
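For example, with the port mappings shown above, the refresh can be triggered with a simple curl call:

curl http://<host>:4505/reload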


The logging from the container indicates that a git pull was performed — and returned no new sources:


Because there are no changed files, nodemon will not restart the application in this case.

One requirement at this moment for this generic container to work is that the Node application has a package.json with a scripts.start entry in its root directory; nodemon expects that entry as instruction on how to run the application. This same package.json is used with npm install to install the required libraries for the Node application.
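A minimal package.json that satisfies this requirement could look as follows — a sketch, where the application name, start file and dependencies are placeholders:

{
  "name": "my-node-app",
  "version": "1.0.0",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "express": "^4.16.0"
  }
}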

Summary

The next figure gives an overview of what this article has introduced. If you want to run a Node application whose sources are available in a GitHub repository, then all you need is a Docker host and these are your steps:

  1. Pull the Docker image: docker pull lucasjellema/node-run-live-reload:0.1 (this image currently contains the Node 8 runtime, npm, nodemon, a git client and the reloader application) 
    Alternatively: build and tag the container yourself.
  2. Run the container image, passing the GitHub URL of the repo containing the Node application; specify required port mappings for the Node application and the reloader (port 4500): docker run --name express -p 3011:3000 -p 4500:4500 -e GITHUB_URL=<GIT HUB REPO URL> -d lucasjellema/node-run-live-reload:0.1
  3. When the container is started, it will clone the Node application from GitHub
  4. Using npm install, the dependencies for the application are installed
  5. Using nodemon, the application is started (and the sources are monitored so as to restart the application upon changes)
  6. Now the application can be accessed at the host running the Docker container on the port as mapped per the docker run command
  7. With an HTTP request to the /reload endpoint, the reloader application in the container is instructed to
  8. git pull the sources from the GitHub repository and run npm install to fetch any changed or added dependencies
  9. if any sources were changed, nodemon will now automatically restart the Node application
  10. the upgraded Node application can be accessed

Note: alternatively, a WebHook trigger can be configured. This makes it possible to automatically trigger the application reload facility upon commits to the GitHub repo. Just like a regular CD pipeline this means running Node applications can be automatically upgraded.
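To test the webhook endpoint locally, without configuring anything on GitHub, a push event can be simulated with a hand-crafted POST. Note that the exact payload fields the reloader inspects live in the elided part of server.js, so the body below is only an assumption based on the general shape of GitHub push events:

curl -X POST http://<host>:4505/github/push \
  -H "Content-Type: application/json" \
  -d '{"commits":[{"modified":["app.js"]}]}'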


Next Steps

Some next steps I am contemplating with this generic container image — and I welcome your pull requests — include:

  • allow an automated periodic application refresh to be configured through an environment variable on the container (and/or through a call to an endpoint on the reload application) instructing the reloader to do a git pull every X seconds.
  • use https://www.npmjs.com/package/simple-git instead of shelljs plus local Git client (this could allow usage of a lighter base image — e.g. node-slim instead of node)
  • force a restart of the Node application — even if it has not changed at all
  • allow for alternative application startup scenarios besides running the scripts.start entry in the package.json in the root of the application
Resources

GitHub Repository with the resources for this article — including the Dockerfile to build the container: https://github.com/lucasjellema/docker-node-run-live-reload

My article on my previous attempt at creating a generic Docker container for running a Node application from GitHub: https://technology.amis.nl/2017/05/21/running-node-js-applications-from-github-in-generic-docker-container/

Article and Documentation on nodemon: https://medium.com/lucjuggery/docker-in-development-with-nodemon-d500366e74df and https://github.com/remy/nodemon#nodemon

NPM module shelljs that allows shell commands to be executed from Node applications: https://www.npmjs.com/package/shelljs


Tech·Ed for Novices, 2008 Edition

Originally posted on: http://brustblog.net/archive/2008/02/25/techmiddoted-for-novices-2008-edition.aspx

While I'm waiting to find out if I'll be able to go to Tech·Ed again this year, I am still keeping an eye on what's going on with the conference. For last year's Tech·Ed, Microsoft created a "Tips for the Newbie" page (which is apparently no longer available) and I created a follow-on post called Tech·Ed for Novices. Since the conference this year is split into two separate weeks, I thought it was even more important to offer some tips on how to deal with the information overload and what Tech·Ed has to offer.

Session Types

There are six different session types available at Tech·Ed and each one is very different, so it is up to your individual learning style to decide which ones will work best for you. The problem is, what are the differences?

Birds of a Feather (BOF)

The Birds of a Feather sessions are not presentations; they are moderated, open-forum discussions that promote interaction. Because these sessions are led by third-party experts, attendees enjoy free-flowing dialogues about products, technologies, and solutions without necessarily focusing on Microsoft. They are generally fairly small, around 20 people or so, but that is an average.

For Tech·Ed Developers, the sessions are organized and facilitated by INETA, while Culminis is responsible for Tech·Ed IT Professionals.

Breakout Session

Breakout sessions are what everyone probably thinks of when they think of a convention like this. These are the speaker-led presentations. They are usually not very interactive (except for a question and answer period at the end) and are a great way to learn about new technologies from some of the industry's leading experts. These are the most common type of session you will probably be attending.

Hands-on Lab (HOL)

The Hands-on Labs are self-paced technical labs. They are a great way to get direct access to the technologies being presented at the conference and "play". The labs come with a lab manual and additional resources about the topic as well as lab assistants to help you out if you run into a problem.

Instructor Led Lab (ILL)

Instructor Led Labs are just like the Hands-on Labs, except you have an instructor to help walk you through the lab, answer technical questions and provide some level of interaction. All of the instructors for the Instructor Led Labs are Microsoft Certified Trainers (MCTs), so you can be assured that you are in capable hands.

Interactive Theater (TLC)

The Interactive Theaters are probably not what you might expect. If you plan on going to one, get there early as they fill up quickly. These are very interactive, very informal discussions on a particular topic by someone who is usually very knowledgeable about that topic. These are an excellent way to get some very good, detailed information about a topic.

Lunch Session (LNC)

Lunch sessions, as their name implies, occur during the lunch break and are usually an interactive discussion about a topic. They are a little more formalized than the Birds of a Feather sessions, but not quite as interactive as the Theater sessions.

Building Your Schedule

While Microsoft generally considers the "Communications Network", also known as CommNet, as the place to start for Tech·Ed, I think the place to start is with the online schedule builder before you get to the event. Hopefully the schedule builder will get a much-needed overhaul to make it more user-friendly, but we won't know for another few months.

No matter what, the point is that you need to have an idea of what sessions you are going to before you get there. For Tech·Ed Developers there are 16 technical tracks, while Tech·Ed IT Professionals has 20, making it difficult to figure out what is important. (Hopefully I'll be able to provide a summary of the track content to help you focus on which tracks are important to you.)

The thing to realize about the tracks is that they are a way to group session content on a large scale. Sessions can, and often do, cross multiple tracks and sessions aren't always in the track you might expect. As a result, don't try to focus on any one track and only go to sessions in that track. It's a good starting point, but that's it.

For example, if you are a solutions architect (or any kind of technical architect), chances are that the majority of your sessions will fall under the "Architecture" track, but that doesn't mean there won't be important sessions in the "Developer Tools and Languages" track or the "Web and User Experience" track.

The other thing to understand about the tracks is that they are only useful if you want sessions covering a broad (or high-level) technology category. If you want information specific to a certain technology, such as Smart Clients, you will need to spend some extra time looking at each of the tracks and finding the sessions that are relevant.

Deciding on your sessions

Deciding on sessions is not a trivial task and can take at least several hours. When picking sessions, don't choose them based solely on what your employer has you working on. That's right, just because they are paying for you to go does not mean that you should restrict your sessions to what you think is important to them or your current project. Now, I'm not saying you should completely ignore it, either. Usually you're working there (and on that project) because your technical interests and skills match to some degree. If there is a technology that you are using that you feel you need more in-depth information about, then pick some sessions about it. Just remember to also pick sessions that are going to be interesting to you and help you in the long-run of your career. If you don't have any interest in the topic and are going to the session only because your employer told you to (or you think it's what they want you to go to) you aren't going to enjoy the experience and you won't get a whole lot out of it.

Don't forget that some of the sessions repeat on a different day. If a session is going to be repeated, the first session should say "Repeated on xxxx" (where xxxx is the day) and the repeat session should say "Repeated from xxxx". The other indicator is that the session "code" for the repeated session should have an "R" at the end of it.

Session Codes

While we're talking about session codes, you should realize that they are just as important as the session titles and descriptions; they give you the "quick glance" way to see some basic information about the session.

The session code has the following format:

AAALLL[R][-SSS]

where

AAA: Technical Track (abbreviated)

LLL: Technical Level

[R]: The actual character "R", only present if the session is a repeat session

[-SSS]: The session (delivery) type. This is the actual "-" character followed by the 3 letter session type abbreviation. Remember, if it doesn't have one then it's most likely a breakout session.
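For example, a hypothetical code like ARC305R would decode as: Architecture track ("ARC"), 300-level (experienced) content, a repeat of an earlier session, and — since there is no delivery-type suffix — most likely a breakout session.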

The technical tracks all have a 3 letter abbreviation, which allows you to easily see what track a session belongs to. Remember, sessions can repeat across tracks, but they are assigned only one code. What this means is that the primary track (usually the one listed first) is the one whose code is used, so you may see SOA sessions in the SFT track.

There are two exceptions to this rule for the following session types:

  • Birds of a Feather - BOF
  • Lunch Session - LNC

The technical level is broken down by difficulty and is:

  • 200 - Intermediate
  • 300 - Experienced
  • 400 - Advanced

In reality, only the first number (2, 3, or 4) is important here. The rest of the number is just a sequential number that has no useful meaning (at least to us). Even though some of the other session types also have a technical level associated with them, this level is only used in the session code for Breakout sessions. The rest of the sessions just use a straight sequential number.

The session (or delivery) type is only listed if the session is not a breakout session. The session types are:

  • Hands-on Lab - HOL
  • Instructor Led Lab - ILL
  • Interactive Theater - TLC

Creating your calendar

Just because cloning isn't feasible and you can't be in two places at once doesn't mean you can't schedule multiple sessions at the same time. The best way to build your calendar is to do it in steps.

The first step is to add all of the sessions you have an interest in. Don't worry if there is a lot of overlap. My own schedule usually has at least 3 sessions in most of the time slots. The goal here is to get all of the sessions you are interested in onto the calendar.

The second step is to look at each time slot and prioritize the sessions. If you are using Outlook 2007, you can do this using the color categories. (Outlook 2003 users can use the Label feature.) The idea here is to rank the sessions in order of interest to you. The ones you most want to attend should be green, the second most yellow, and third (or more) should be unmarked. This is a great way to see where your interests are heading and helps narrow the field.

It's alright to end up with multiple priority 1 sessions at the same time. Your schedule is not set in stone and it will change. Leading up to the event, Microsoft is still adding sessions, and once the conference starts, sessions will be canceled or rescheduled and repeat sessions will be added.

Communications Network (CommNet)

No, this isn't like SkyNet (from the Terminator movies or the UK), but it is the central hub for everything that is going on at Tech·Ed. In order to log in, you will need to use the same user name and password that you were given by Microsoft when you registered. (This is the same one you use to access the online schedule builder tool.)

CommNet gives you key logistical information about the conference:

  • conference agenda
  • floor plans
  • bus shuttle schedules
  • city highlights
  • session schedule

At the end of each session, you will be asked to fill out an evaluation survey for the chance to win...something. You do this through CommNet. You can also download presentation materials for your session ahead of time (and sometimes after the session as well). Not all of the sessions get their content up before the session, so you will have to keep checking.

Wireless Internet Access

If you have a laptop or mobile device equipped with 802.11a/b wireless, then you can connect to the Internet throughout most of the convention center. Microsoft has Wireless Help Desks set up throughout the convention center to help configure your laptop or mobile device if you run into problems. Remember, the wireless network is an open, shared network, so there are no security protocols (WPA, WEP, etc.) enabled. Treat this just like you would any other public wireless network.

To be safe, follow these steps:

  • Install all Microsoft critical patches and updates for your operating system. Visit www.update.microsoft.com/windowsupdate
  • Download and install the latest anti-virus signatures for your software
  • If you suspect that your system is not properly patched, visit the wireless help desk PRIOR to connecting to the network
  • Ensure that your personal firewall software is active
  • Ensure that your system is set to infrastructure wireless use only and is not associating to, or broadcasting, ad-hoc networks
  • Monitor your connections—do not connect to the wireless and wired networks at the same time

Virtual Tech·Ed

Starting with Tech·Ed 2007, Microsoft introduced Virtual Tech·Ed, which is the "one-stop" shop to extend the Tech·Ed experience before, during, and after the event. In addition to Virtual Tech·Ed, you also have Tech·Ed Connect, which allows you to connect with other Tech·Ed attendees before the conference starts.

Using Tech·Ed Connect, you can quickly:

  • Identify other attendees with whom you would like to meet
  • Share information about yourself and your background
  • Schedule meetings that will take place during the conference.

Dealing with the Information Overload

As you can see, there is a ton of information and it can be quite overwhelming for the novice attendee (and even some veterans). The best way to deal with the information overload is to:

  • Plan your schedule so you are going to sessions that you have an interest in.
  • Try to break up the content types; don't do just breakout sessions. Try some Hands-on or Instructor Led Labs, Birds of a Feather sessions, or the Interactive Theaters.
  • Don't worry about missing a session. All attendees will receive a DVD set containing all of the presentations a few months after the conference ends. A lot of the breakout sessions are recorded during the event, so you can actually watch them from the DVD.
  • Hit the exhibit hall. It's a great way to relax between sessions and find out what other companies are working on or what's new from your favorite company. Yes, you will end up getting your badge scanned and probably have to deal with sales calls afterwards, but it can be worth it.
  • Take advantage of the free food. One thing that Tech·Ed is known for is the extreme amount of snacks provided.
  • Go to the Jam Sessions and the event party. They are worth it. I make it a point to go each year and usually bring my family to it. Having them around helps decompress at the end of the day and gives them a nice mini-vacation.

In the end, enjoy the conference. It's a great way to get some excellent technical information and some great networking opportunities.

Technorati Tags: TechEd

How newlines affect Linux kernel performance

The Linux kernel strives to be fast and efficient. As it is written mostly in C, it can mostly control how the generated machine code looks. Nevertheless, as the kernel code is compiled into machine code, the compiler optimizes the generated code to improve its performance. The kernel code, however, employs uncommon coding techniques, which can defeat code optimizations. In this blog post, I share my experience in analyzing the reasons for poor inlining of kernel code. Although the performance improvements are not significant in most cases, understanding these issues is valuable in preventing them from becoming larger. Newlines, as promised, will be one of the reasons, though not the only one.

New lines in inline assembly

One fine day, I encountered a strange phenomenon: minor changes I made to the Linux source code caused a small but noticeable performance degradation. As I expected these changes to actually improve performance, I decided to disassemble the functions which I had changed. To my surprise, I realized that my change caused functions that were previously inlined not to be inlined anymore. The decision not to inline these functions seemed dubious, as they were short.

I decided to investigate this issue further and to check whether it affects other parts of the kernel. Arguably, it is rather hard to say whether a function should be inlined, so some sort of indication of bad inlining decisions is needed. C functions that are declared with the inline keyword are not bound to be inlined by the compiler, so a non-inlined function that is marked with the inline keyword is not by itself an indication of a bad inlining decision.

Arguably, there are two simple heuristics for finding functions which were suspiciously not inlined. One heuristic is to look for short (binary-wise) functions by looking at the static symbols. A second heuristic is to look for functions which appear in multiple translation units (objects), as this might indicate they were declared as inline but were eventually not inlined, and that they are in common use. In both cases, there may be valid reasons for the compiler not to inline functions even if they are short, for example if they are used as a value for a function pointer. However, they can give an indication if something is "very wrong" in how inlining is performed, or more correctly, ignored.

In practice, I used both heuristics, but in this post I will only use the second one to check whether inlining decisions seem dubious. To do so, I rebuilt the kernel using the localyesconfig make target to incorporate the modules into the core. I ensured the "kernel hacking" features in the config were off, as those tend to blow up the size of the code and rightfully cause functions not to be inlined. I then looked for the static functions which had the most instances in the built kernel:

$ nm --print-size ./vmlinux | grep ' t ' | cut -d' ' -f2- | sort | uniq -c | grep -v '^ 1' | sort -n -r | head -n 5

Instances Size             Function Name
       36 0000000000000019 t copy_overflow
        8 000000000000012f t jhash
        8 000000000000000d t arch_local_save_flags
        7 0000000000000017 t dst_output
        6 000000000000004e t put_page

As seen, the results are suspicious. As mentioned before, in some cases there are good reasons for functions not to be inlined. jhash() is a big function (303 bytes), so it is reasonable for it not to be inlined. dst_output()'s address is used as a function pointer, which causes it not to be inlined. Yet the other functions seem to be great candidates for inlining, and it is not clear why they are not inlined. Let's look at the source code of copy_overflow(), which has many instances in the binary:

static inline void copy_overflow(int size, unsigned long count)
{
	WARN(1, "Buffer overflow detected (%d < %lu)!\n", size, count);
}

Will the disassembly tell us anything?

0xffffffff819315e0 <+0>:  push   %rbp
0xffffffff819315e1 <+1>:  mov    %rsi,%rdx
0xffffffff819315e4 <+4>:  mov    %edi,%esi
0xffffffff819315e6 <+6>:  mov    $0xffffffff820bc4b8,%rdi
0xffffffff819315ed <+13>: mov    %rsp,%rbp
0xffffffff819315f0 <+16>: callq  0xffffffff81089b70 <__warn_printk>
0xffffffff819315f5 <+21>: ud2
0xffffffff819315f7 <+23>: pop    %rbp
0xffffffff819315f8 <+24>: retq

Apparently not. Notice that out of the 9 assembly instructions shown above, 6 deal with the function entry and exit — for example, updating the frame pointer — and only 3 are really needed: loading the format string address, calling __warn_printk, and the ud2.

To understand the problem, we must dig deeper and look at the warning mechanism in Linux. On x86, this mechanism shares its infrastructure with the bug reporting mechanism. When a bug or a warning is triggered, the kernel prints the filename and the line number in the source code that triggered it, which can then be used to analyze the root cause. A naive implementation, however, would pollute the code cache with this information, as well as with the call to the function that prints the error message, consequently causing performance degradation.

Linux therefore uses a different scheme: it emits an exception-triggering instruction (ud2 on x86) and saves the warning information in a bug table that is placed in a different section of the executable. When a warning is triggered using the WARN() macro, an exception is raised and the exception handler looks up the warning information — the source-code filename and line — in the table.

Inline assembly is used to save this information in _BUG_FLAGS(). Here is its code, after some simplifications to ease readability:

asm volatile("1: ud2\n" "<strong>.pushsection</strong> __bug_table,\"aw\"\n" "2: .long 1b - 2b\n" /* bug_entry::bug_addr */ " .long %c0 - 2b\n" /* bug_entry::file */ " .word %c1\n" /* bug_entry::line */ " .word %c2\n" /* bug_entry::flags */ " .org 2b+%c3\n" "<strong>.popsection</strong>" : : "i" (__FILE__), "i" (__LINE__), "i" (flags),

"i" (sizeof(struct bug_entry)));

Ignoring the assembly shenanigans that this code uses, we can see that in practice it generates a single ud2 instruction. However, the compiler considers this code to be "big" and consequently oftentimes does not inline functions that use WARN() or similar functions.

The reason turns out to be the newline characters (marked as '\n' above). The kernel compiler, GCC, is unaware of the code size that will be generated by the inline assembly. It therefore tries to estimate the size based on newline characters and statement separators (';' on x86). In GCC, the code that performs this estimation is in the estimate_num_insns() function:

int
estimate_num_insns (gimple *stmt, eni_weights *weights)
{
  ...
    case GIMPLE_ASM:
      {
        int count = asm_str_count (gimple_asm_string (as_a <gasm *> (stmt)));
        /* 1000 means infinity. This avoids overflows later
           with very long asm statements.  */
        if (count > 1000)
          count = 1000;
        return count;
      }
  ...
}

Note that this pattern of saving data using inline assembly is not limited to bugs and warnings. The kernel uses it for many additional purposes: exception tables, which gracefully handle exceptions triggered inside the kernel; alternative-instruction tables, which tailor the kernel at boot time to the specific CPU architecture extensions that are supported; annotations used for stack metadata validation by objtool; and so on.
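For illustration, here is a minimal sketch of the same pattern applied to exception tables — simplified, since the real x86 implementation stores relative offsets and a fixup handler rather than a bare absolute address (val and ptr stand for local variables):

/* Record the address of a potentially faulting load in a separate
 * section, so the exception handler can recognize and handle it. */
asm volatile("1: mov (%1), %0\n"
             ".pushsection __ex_table, \"a\"\n"
             "   .long 1b\n"        /* address of the faulting insn */
             ".popsection"
             : "=r" (val) : "r" (ptr));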

Before we get to solving this problem, a question needs to be raised: is the current behavior flawed at all? After all, the size of the kernel will increase if functions that use WARN(), for example, are inlined. This increase in size can make the kernel image bigger, and since the Linux kernel cannot be paged out, will also increase memory consumption. However, the main reason the compiler strives to avoid inflation of the code size is to avoid pressure on the instruction cache, whose impact may offset the benefits of inlining. Moreover, the heuristics of other compiler optimizations (e.g., loop optimizations) depend on the size of the code.

Solving the problem is not trivial. Ideally, GCC would use an integrated assembler, similarly to LLVM, which would give a better estimation of the generated code size of inline assembly. Experimentally, LLVM seems to make the right inlining decisions and is not affected by newlines or by data that is placed in other sections of the executable. Interestingly, it appears to do so even when the integrated assembler is not used for assembly. GCC, however, invokes the GNU assembler after the code is compiled, which prevents it from getting a correct estimation of the code size.

Alternatively, the problem could have been solved by overriding GCC's code size estimation through a directive or a built-in function. However, looking at GCC code does not reveal a direct or indirect way to achieve this goal.

One may think that using the always_inline function attribute to force the compiler to inline functions would solve the problem. It appears that some have encountered the problem of poor inlining decisions in the past and, without understanding the root cause, used this solution. However, this solution has several drawbacks. First, it is hard to make and maintain these annotations. Second, it does not address other code optimizations that rely on code-size estimation. Third, the kernel uses various configurations and supports multiple CPU architectures, which may require a certain function to be inlined in some setups and not inlined in others. Finally, and most importantly, using always_inline can just push the problem upwards to calling functions, as we will see later.

Therefore, a more systematic solution is needed. It comes in the form of assembler macros that hold the long assembly code, leaving a single line inside the inline assembly that invokes the macro. This solution does not only improve the generated machine code; it also makes the assembly code more readable, as it avoids various quirks that are required in inline assembly, for example newline characters. Moreover, in certain cases this change allows consolidating the currently separate implementations that are used in C and assembly, which eases code maintenance.
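As a simplified sketch of the idea — not the kernel's actual macros — the long sequence is defined once as an assembler macro, and each use site emits a one-line invocation, which GCC's newline-based heuristic counts as roughly one instruction:

/* Define the assembler macro once (the kernel defines its macros in a
 * shared location rather than in each translation unit). */
asm(".macro WARN_ENTRY file:req line:req\n"
    "1: ud2\n"
    "   .pushsection __bug_table, \"aw\"\n"
    "2: .long 1b - 2b\n"      /* bug_entry::bug_addr */
    "   .long \\file - 2b\n"  /* bug_entry::file */
    "   .word \\line\n"       /* bug_entry::line */
    "   .popsection\n"
    ".endm");

/* The use site is now a single asm line. */
#define WARN_TRAP() \
    asm volatile("WARN_ENTRY file=%c0 line=%c1" \
                 : : "i" (__FILE__), "i" (__LINE__))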

Addressing the issue shows a performance improvement of tens of cycles for certain system calls — indeed not too notable. After addressing these issues, copy_overflow() and the other functions disappear from the list of commonly non-inlined inline functions.

Instances Size             Function Name
        9 000000000000012f t jhash
        8 0000000000000011 t kzalloc
        7 0000000000000017 t dst_output
        5 000000000000002f t acpi_os_allocate_zeroed
        5 0000000000000029 t acpi_os_allocate

However, we got some new ones. Let's try to understand where they come from.

Constant computations and inlining

As shown, kzalloc() is not always inlined, although its code is very simple.

static inline void *kzalloc(size_t size, gfp_t flags)
{
	return kmalloc(size, flags | __GFP_ZERO);
}

The assembly, again, does not provide any answer as to why it is not inlined:

0xffffffff817929e0 <+0>:  push   %rbp
0xffffffff817929e1 <+1>:  mov    $0x14080c0,%esi
0xffffffff817929e6 <+6>:  mov    %rsp,%rbp
0xffffffff817929e9 <+9>:  callq  0xffffffff8125d590 <__kmalloc>
0xffffffff817929ee <+14>: pop    %rbp
0xffffffff817929ef <+15>: retq

The answer to our question lies in kmalloc(), which is called by kzalloc() and is considered by GCC's heuristics to consist of many instructions. kmalloc() is inlined since it is marked with the always_inline attribute, but its estimated instruction count is then attributed to the calling function — kzalloc() in this case. This result exemplifies why the always_inline attribute is not a sufficient solution to the inlining problem.

Still, it is not clear why GCC estimates that kmalloc() would compile into many instructions. As shown, it is compiled into a single call to __kmalloc(). To answer this question, we need to follow kmalloc()'s code, which eventually uses the ilog2() macro to compute the log2 of an integer, in order to compute the page allocation order.

Here is a shortened version of ilog2():

#define ilog2(n)                                \
(                                               \
    __builtin_constant_p(n) ? (                 \
        /* Optimized version for constants */   \
        (n) < 2 ? 0 :                           \
        (n) & (1ULL << 63) ? 63 :               \
        (n) & (1ULL << 62) ? 62 :               \
        ...                                     \
        (n) & (1ULL << 3) ? 3 :                 \
        (n) & (1ULL << 2) ? 2 :                 \
        1 ) :                                   \
    /* Another version for non-constants */     \
    (sizeof(n) <= 4) ?                          \
        __ilog2_u32(n) :                        \
        __ilog2_u64(n)                          \
)

As shown, the macro first uses the built-in function __builtin_constant_p() to determine whether n is known to be a constant at compilation time. If n is known to be constant, a long series of conditions is evaluated to compute the result at compilation time, which allows further optimizations. Otherwise, if n is not known to be constant, short code is emitted to compute the result at runtime. Yet, regardless of whether n is constant or not, all of the conditions in the ilog2() macro are evaluated at compilation time and do not translate into any machine-code instructions.

However, although the generated code is efficient, it causes GCC, again, to misestimate the number of instructions that ilog2() takes. Apparently, the number of instructions is estimated before inlining decisions take place, and at this stage the compiler usually does not yet know whether n is constant. Later, after inlining decisions are performed, GCC cannot update the instruction-count estimation accordingly.

This inlining problem is not as common as the previous one, yet it is not rare. Bit operations (e.g., test_bit()) and bitmaps commonly use __builtin_constant_p() in the described manner. As a result, functions that use these facilities, for example cpumask_weight(), are not inlined.

A possible solution for this problem is to use the built-in __builtin_choose_expr() to test __builtin_constant_p(), instead of using C if-conditions and conditional operators (?:):

#define ilog2(n)                                    \
(                                                   \
    __builtin_choose_expr(__builtin_constant_p(n),  \
        ((n) < 2 ? 0 :                              \
         (n) & (1ULL << 63) ? 63 :                  \
         (n) & (1ULL << 62) ? 62 :                  \
         ...                                        \
         (n) & (1ULL << 3) ? 3 :                    \
         (n) & (1ULL << 2) ? 2 :                    \
         1 ),                                       \
    (sizeof(n) <= 4) ?                              \
        __ilog2_u32(n) :                            \
        __ilog2_u64(n))                             \
)

This built-in is evaluated earlier in the compilation process, before inlining decisions are made. Yet there is a catch: as this built-in is evaluated earlier, GCC is only able to determine that an argument is constant for constant expressions, which can cause less efficient code to be generated. For instance, if a constant is given as a function argument, GCC will not be able to determine it is constant. In the following case, for example, the non-constant version will be used:

int bar(int n)
{
	return ilog2(n);
}

int foo(int n)
{
	return bar(n);
}

v = foo(bar(5)); /* will use the non-constant version */

It is therefore questionable whether using __builtin_choose_expr() is an appropriate solution. Perhaps it is better to just mark functions such as kzalloc() with the always_inline attribute. Compiling with LLVM reveals, again, that LLVM's inlining decisions are not negatively affected by the use of __builtin_constant_p().
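For reference, that alternative is a one-line change, shown here as a sketch using the kernel's __always_inline annotation, not as a committed patch:

/* Force inlining regardless of GCC's size estimate for kmalloc(). */
static __always_inline void *kzalloc(size_t size, gfp_t flags)
{
	return kmalloc(size, flags | __GFP_ZERO);
}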

Function attributes

Finally, there are certain function attributes that affect inlining decisions. Using function attributes to set an optimization level for a specific function can prevent the compiler from inlining that function or the functions it calls. The Linux kernel rarely uses such attributes, but one of their uses is in the KVM function vmx_vcpu_run(), a very hot function that launches or resumes the virtual machine. The optimization attribute is used in this function only to prevent cloning of the function. Its side effect, however, is that none of the functions it uses are inlined, including, for example, the function to_vmx():

0x0000000000000150 <+0>: push   %rbp
0x0000000000000151 <+1>: mov    %rdi,%rax
0x0000000000000154 <+4>: mov    %rsp,%rbp
0x0000000000000157 <+7>: pop    %rbp
0x0000000000000158 <+8>: retq

This function just returns as output the same argument it got as input. Not inlining the functions called by vmx_vcpu_run() induces significant overhead, which can be as high as 10% for a VM-exit.

Finally, the cold function attribute causes inlining to be done less aggressively. This attribute informs the compiler that a function is unlikely to be executed, and the compiler, among other things, optimizes such functions for size rather than speed, which can result in very non-aggressive inlining decisions. All the __init and __exit functions, which are used during kernel and module (de)initialization, are marked as cold. It is questionable whether this is the desired behavior.
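As a rough sketch of how these annotations expand — simplified, since the kernel's actual definitions carry additional attributes:

/* Simplified; the kernel's real __init/__exit add more annotations. */
#define __cold  __attribute__((__cold__))
#define __init  __attribute__((__section__(".init.text"))) __cold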

Conclusions

Despite the fact that C appears to give us great control over the generated code, it is not always the case. Compiler extensions may be needed to give programmers greater control. Tools that analyze whether the generated binary is efficient, considering the source code, may be needed. In the meanwhile, there is no alternative to manual inspection of the generated binary code.

Thanks to Linus Torvalds, Hans Peter Anvin, Masahiro Yamada, Josh Poimboeuf, Peter Zijlstra, Kees Cook, Ingo Molnar and others for their assistance in the analysis and in solving this problem.


#TechTuesday with CNET’s Ian Sherr: How To Protect Your Facebook Account From Cloning
CNET’s Ian Sherr joins Bill and Wendy over the phone to share the latest in tech news. They talk about the Facebook friend request scam that’s going around and how to avoid it and protect your account. Plus, Ian gives a preview of what to expect from today’s Google Pixel 3 launch event. You can find Bill and Wendy on Twitter,  Facebook, and Instagram. The Bill and Wendy Show airs Monday through Friday from 10 a.m. to noon, then streaming […]
Bill and Wendy Full Show 10.9.18: Rock on
Big news! The Rock & Roll Hall of Fame has announced its 2019 nominees! Bill and Wendy pick out their top picks. They also talk about Nikki Haley’s resignation as US ambassador. CNET’s executive editor Ian Sherr explains Facebook account cloning and how to avoid it. A good friend of the show, Tim Ryan, drops by with Keagan Hernandez and Keagan’s mother, Leah (Hernandez) Larmond. Keagan is organizing a 5K run called “The First Annual 5K Run for RyRy,” in […]
          The "Facebook Do Not Accept a 2nd Friend Request From Me" Messages      Cache   Translate Page      

There is no need to share the "Do Not Accept a 2nd Friend Request From Me" messages that are circulating on Facebook. What the messages are referring to is called "Profile Cloning," which is a form of scam used by cybercriminals to clone accounts and then trick the owners' friends into accepting a second or new friend request. Continue reading...


          The "Facebook Do Not Accept a New Friendship From Me" Messages      Cache   Translate Page      

There is no need to share the "Do Not Accept a New Friendship From Me" messages that are circulating on Facebook. What the messages are referring to is called "Profile Cloning," which is a form of scam used by cybercriminals to clone accounts and then trick the owners' friends into accepting a second or new friend request. Continue reading...


          The "Facebook Account is being Cloned" Messages      Cache   Translate Page      

There is no need to share the "Facebook Account is being Cloned" messages that are circulating on Facebook. What the messages are referring to is called "Profile Cloning," which is a form of scam used by cybercriminals to clone accounts and then trick the owners' friends into accepting a second or new friend request. Continue reading...


Global Restriction Endonucleases Market Will Grow at a CAGR 5.6% and Reach USD 300 Million by 2023, from USD 210 Million in 2017

Lewes, DE -- (SBWIRE) -- 10/09/2018 -- The global Restriction Endonucleases market will register a 5.6% CAGR in terms of revenue and reach US$ 300 million by 2023, from US$ 210 million in 2017.

A restriction endonuclease is an enzyme that cuts DNA at or near specific recognition nucleotide sequences known as restriction sites. These enzymes are found in bacteria and are harvested from them for use in research and commercial applications. Restriction enzymes are commonly classified into four types, which differ in their structure and in whether they cut their DNA substrate at the recognition site or whether the recognition and cleavage sites are separate from one another.

To date, more than 10,000 bacteria have been screened for the presence of restriction enzymes, and more than 2,500 restriction enzymes with over 250 distinct sequence specificities have been discovered. These enzymes are used in conventional cloning, deciphering epigenetic modifications, construction of DNA libraries and in vivo gene editing. The main end users are Academic & Research Institutes, Hospitals & Diagnostic Centers, and Pharmaceutical & Biotechnology companies.

The Restriction Endonuclease industry is relatively concentrated, and the players mainly come from North America and Western Europe. Worldwide, the major manufacturers are New England Biolabs, Thermo Fisher Scientific, Takara Bio, Illumina, Agilent Technologies, Roche, GE Healthcare, Promega Corporation, Qiagen, Jena Biosciences and others.

Over the next five years, the study projects that Restriction Endonucleases will register a 5.6% CAGR in terms of revenue, reaching US$ 300 million by 2023, from US$ 210 million in 2017.

This report studies the global market, especially in North America, Europe, Asia-Pacific, South America, and the Middle East and Africa. It focuses on the top 5 players in each region, with sales, price, revenue and market share from 2013 to 2018. The top players are:
New England Biolabs
Thermo Fisher Scientific
Takara Bio
Illumina
Agilent Technologies
Roche
GE Healthcare
Promega Corporation
Qiagen
Jena Biosciences

By region, this report splits the global market into several key regions, with sales, revenue and market share of the top players in these regions, from 2013 to 2018 (forecast):
North America (United States, Canada and Mexico)
Asia-Pacific (China, Japan, Southeast Asia, India and Korea)
Europe (Germany, UK, France, Italy and Russia etc.)
South America (Brazil, Chile, Peru and Argentina)
Middle East and Africa (Egypt, South Africa, Saudi Arabia)

Split by product type, with sales, revenue, price and market share of each type, the market can be divided into:
Type I
Type II
Type III
Type IV
Others

Split by application, this report focuses on sales, market share and growth rate in each application, which can be divided into:
Academic & Research Institutes
Hospitals & Diagnostic Centers
Biopharmaceutical
Other

Spanning over 121 pages, the "Forecast of Global Restriction Endonucleases Players Market 2023" report covers: Restriction Endonucleases Market Overview; Global Restriction Endonucleases Sales, Revenue (Value) and Market Share by Players; Global Restriction Endonucleases Sales, Revenue (Value) by Regions, Type and Application (2013-2018); North America Top 5 Players Restriction Endonucleases Sales, Revenue and Price; Europe Top 5 Players Restriction Endonucleases Sales, Revenue and Price; Asia-Pacific Top 5 Players Restriction Endonucleases Sales, Revenue and Price; South America Top 5 Players Restriction Endonucleases Sales, Revenue and Price; Middle East & Africa Top 5 Players Restriction Endonucleases Sales, Revenue and Price; Global Restriction Endonucleases Players Profiles/Analysis; Global Restriction Endonucleases Market Forecast (2018-2023); Restriction Endonucleases Manufacturing Cost Analysis; Industrial Chain, Sourcing Strategy and Downstream Buyers; Marketing Strategy Analysis, Distributors/Traders; Market Effect Factors Analysis; Research Findings and Conclusion; and Appendix.

Please visit this link for more details: https://www.marketresearchreports.com/mrrpb2/forecast-global-restriction-endonucleases-players-market-2023

Find all Pharma and Healthcare Reports at: https://www.marketresearchreports.com/pharma-healthcare

For related reports please visit: https://www.marketresearchreports.com/search/site/Restriction%2520Endonucleases

Read our Interactive Market Research Blog

About MarketResearchReports.com
MarketResearchReports.com is the world's largest store offering quality market research, SWOT analysis, competitive intelligence and industry reports. We help everyone from Fortune 500 companies to start-ups with the latest market research reports on global and regional markets, covering key industries, leading market players, new products and the latest industry analysis and trends.

Contact us for your market research requirements: https://www.marketresearchreports.com/contact

For more information on this press release visit: http://www.sbwire.com/press-releases/global-restriction-endonucleases-market-will-grow-at-a-cagr-56-and-reach-usd-300-million-by-2023-from-usd-210-million-in-2017-1061147.htm

Media Relations Contact

Sudeep Chakravarty
Director - Operations
MarketResearchReports.com
Telephone: 1-302-703-9904
Email: Click to Email Sudeep Chakravarty
Web: https://www.marketresearchreports.com/mrrpb2/forecast-global-restriction-endonucleases-players-market-2023



Jake Bentley: Plans to start Saturday
Jake Bentley: Bentley (knee) said that he expects to start Saturday against Texas A&M, David Cloninger of The Charleston Post and Courier reports. Visit RotoWire.com for more analysis on this update.
Face of Chaos

Oh wait... I mean Mankind. No, scratch that, I DO mean Chaos.

 

I get the whole "you're on your own" trend in games, I really do. I also get the open world, PvP type of gaming experience, I really do. I also get the "overly complex-will take weeks to begin to figure out a small fraction of" in games as well.

 

What I don't get, is combining all of those into one.

 

The game starts off alright. You have a few of the standard "getting to know you" type of quests. At one point, though, the quests dry up and the game pretty much leaves you stranded in a cold, harsh world, with little to nothing to go on and only a vague idea of what to do.

 

This wouldn't be too bad if the world were at least a familiar environment; you'd be able to identify things for what they are: a cave is a cave, a forest is a forest, etc. You'd have some idea of what they are and, unless you've lived under a dark rock for your entire gaming life, some idea of what to expect.

 

Your first venture into the unknown of FoM is going to be anything but pretty... or familiar. With a generic-looking but foreign environment compared to most games, you'll find yourself deciding to explore. See that guy over there? You could ask him for directions... oh wait. What's that? He's shooting at you? Well, go run back to... err... um... run back to.... *something* Never mind, looks like you're dead. At least the Cloning lab is familiar territory.

 

So much for the grand idea of roleplaying and being able to take on the role of any number of professions you would like to be!

 

Want to be a drug dealer? Why not! Might as well put that time invested in learning the skill to some use and go out on the streets to look for clients... oh, looks like they just randomly killed you again. Scratch that idea off.

 

For a game that touts itself on allowing you to do or be anything, you'll soon find that your options are extremely limited. The second you decide to break from the norm and go out on your own, you'll find yourself getting shot at and dying in what looks like it should be a relatively safe area. Guess the rest of the players decided that "anything" they wanted to be were homicidal asshats. Then you'll end up spawning back at the original location, minus whatever you lost in dying, and have to follow the generic, safe path like all of the other AFK miners. Rinse and repeat.

 

The locations in this game require too much explaining as to what they are and why they're there. With the exception of the outside portion, they look like they've all been created from the same bland, generic "future" model, and one looks just like the next once you're indoors.

 

You like being able to tell where you are and what things are by looking at them and being able to identify their purpose? You enjoy seeing that guy holding a hammer, at a hot forge with an anvil next to him and knowing, instinctively you're near a blacksmith? NOT IN THIS GAME!

 

It's nothing but trial and error, with the errors resulting in a random PK, requiring more trial, which results in more errors, which... etc. etc. etc. The game thrives off of this self-perpetuating cycle while taking little time to explain any of it to you.

 

Sure, you can queue up skills to learn and advance your knowledge with, but what good are they if you have no real knowledge of what they're supposed to be used for? "Hey, that mining rig thingy the tutorial mentioned, sounds kinda neat. How do I get one and use it?" Pffft damned if I know! But, ya know, good luck finding one and all on a random planet from a random terminal while randomly getting shot in the face trying to figure out where you are!! If you DO manage to get one... good luck trying to figure out how and where to use it!

 

I've heard people in-game comment about he game "dying". Well, no-shit-sherlock! Every new player is pretty much bitch-slapped across the face by either the piss-poor "tutorials", the turn-of-the-millennium graphics, or other players who just want to kill you then complain about the lack of player base. 

 

This game deserves it's excruciatingly slow death.


          What Do I Want To Believe? – Mike Williams
So now, they're just going to grow Sheeple!? Perhaps you missed the pig-human hybrid, but I think it's time to get offended! No matter how outlandish human cloning sounds to people, these experiments are going on all day, every day, on earth and in space. You probably have heard the term stem cell research; well,…
          Cloning 1, 2, 3
Video: Cloning 1, 2, 3
Watch This Video!
Studio: Celebrity Video Distribution
Cloning 1, 2, 3 is a distracting, beguiling way to face our future. The hilarious prospects and zany visions help eliminate the fears and doubts. I believe the future will continue to survive, with brilliant and unique discoveries to make the world stronger and more intelligent. There is no turning back. If we do not do it, someone more intelligent will. Let's have some fun and check out this whimsical method.
          ACID Pro 8.0.7

ACID Pro 8 is the ideal tool for creating loop-based music, letting you produce your own original compositions. It is most often used for creating songs and remixes and for scoring multimedia presentations, films, and web content. The program is hugely popular among radio, video, and TV producers because it can deliver a high-quality musical backing in a very short time. ACID is a great tool for changing the timbre of a sound in real time, for time-stretching, and for quantizing MIDI and audio events. Simple operation with a wealth of advanced features allows for everything from quickly sketching songs to the complex scoring of elaborate productions.

Key features

Over 1,000 loops in a variety of genres
Media Manager
Metronome for playback and recording
Customizable user interface and keyboard shortcuts
Flash [swf] import
Extended media-preview capabilities
Saving project tracks as rendered files
Unlimited audio and MIDI tracks
Real-time pitch and tempo adjustment
Real-time loop preview
Support for alternate time signatures
24-bit/192 kHz audio support
ASIO driver support
Master, auxiliary, soft-synth, and effects busses
Support for many file formats
Live recording
Autosave
Multiprocessor support
Dual-monitor support
Unlimited undo/redo history
External-monitoring support
WAV, Windows Media, and MP3 file import
Media downloads from the web

Mixing and editing

Nestable track folders for easier project organization
Real-time event reversal
Bus-to-bus routing
Downmix monitoring
Improved time-stretching and beat detection
Real-time insertion of one-shot samples
Customizable per-project media folders
Envelope changes applied across multiple selected events

5.1 surround mixing

Beatmapper
Chopper
Keyboard shortcuts for track playback and muting
Scoring for video [avi, mov, wmv]
Editing several tracks at once
ASR envelopes [attack, sustain, release]
Tempo, time-signature, and key-change markers
Command metamarkers
Real-time marker insertion
Volume and pan envelopes
Tempo and key assignment

Effects and MIDI

Groove Mapping and Groove Cloning quantization
ReWire support [mixer and device]
VST effects support
Multi-port VSTi support
Snap-to-grid editing
Tempo-based DirectX effects
Plug-in manager
Effects-bypass command

MIDI editor

MIDI event-list editing and step recording
Time-stretching of MIDI music
MIDI timecode [MTC] generation and triggering
Direct links to audio/MIDI editors
DLS 1 and 2 soft-synth support
VSTi support
DirectX plug-in support
Effects automation
Over 20 real-time DirectX effects
32 customizable effect chains
26 effects busses
Effects-track busses

Automatable DirectX effects

EQ
Resonant filter
Flange/Wah/Phase

Tempo-based DirectX effects

Simple Delay
Amplitude modulation
Chorus

DirectX effects

Compression
Pitch shift
Chorus
Delay/Echo
Noise Gate
Multiband Dynamics
Graphic Dynamics
Parametric EQ
Paragraphic EQ
Graphic EQ
Amplitude modulation
Flange/Wah-Wah/Phase
Smooth/Enhance
Gap/Snip
Vibrato
Distortion

Export and disc burning

New integrated disc-at-once (DAO) CD burning
Audio extraction from CDs
Export to WAV, MP3, WMA, WMV, RM, AIF, and PCA formats, as well as Sony NetMD
Music publishing to ACIDplanet.com

System requirements

Microsoft Windows 2000 or XP
800 MHz processor [1 GHz for video editing]
200 MB of free disk space for program installation
600 MB of disk space (optional)
256 MB RAM
Windows-compatible sound card
CD-ROM drive [for installation]
Supported CD-R/RW recorder [for disc burning]
DirectX 8 or later
Internet Explorer 4.0 or later
          Research Technician - RNAi/CRISPR
MA-Boston, Description: Research Technician - Looking for a temporary Research Technician. Boston location. The person selected will be responsible for:
• molecular cloning
• RNAi and CRISPR/Cas9 technology
• mammalian cell culture
• establishment of stable or inducible knockdown / knockout mammalian cell lines
• fluorescence microscopy / time-lapse imaging (desired skill but not essential)
BS, MS, or PhD Ke
          SuperDuper! 3.2.2 – Advanced disk cloning/recovery utility.
SuperDuper! is an advanced, yet easy to use disk copying program. It can, of course, make a straight copy, or “clone” — useful when you want to move all your data from one machine to another, or do a simple backup. In moments, you can completely duplicate your boot drive to another drive, partition, or image […]
          Police say "don't panic" over Facebook account cloning hoax
Police are urging social media users not to panic over messages claiming their Facebook account has been cloned.
          SuperDuper! 3.2.2 - Advanced disk cloning/recovery utility. (Shareware)

SuperDuper! is an advanced, yet easy to use disk copying program. It can, of course, make a straight copy, or "clone" -- useful when you want to move all your data from one machine to another, or do a simple backup. In moments, you can completely duplicate your boot drive to another drive, partition, or image file.

Clones for safety. To ensure you can safely roll back a system after the unexpected occurs. With a few clicks, you can easily "checkpoint" your system, preserving your computer's critical applications and files while you run on a working, bootable copy. If anything goes wrong, just reboot to the original. When you do, your current Documents, Music, Pictures -- even iSync data -- are available! You can get back to work immediately.

Clones for industry! SuperDuper has enough features to satisfy the advanced user, too. Its simple-but-powerful Copy Script feature allows complete control of exactly what files get copied, ignored, even aliased ("soft linked" for the Unix inclined) from one drive to another.



Version 3.2.2:
Enhancements:
  • Automatic Wake for scheduled copies. No matter how many schedules you may have, and no matter how complex the schedule of events, SuperDuper! will automatically wake the system before it's time for the copy to start, eliminating the need to set a wake event manually (and allowing more than one wake time, since you can only have a single wake in the Energy Saver preference pane)
  • Smart Delete improvements for extremely full drives


Requirements:
  • OS X 10.10 or later
          Production Artist - Visual Merchandising
TX-Grapevine, SUMMARY Working under general supervision, the Visual Merchandising Production Artist will assist the visual merchandising team in the production process from beginning to end, including photo cloning, illustration, plan-o-gram documentation, and administrative support. Utilizes Adobe Suites (InDesign, Illustrator, Photoshop) and additional systems to complete job functions. Applicant must
          Sean Paul and Chi Ching Ching debate cloning themselves, Jamaican parties, and more on Would You Rather
The dancehall artists come up with their own “broke hand dance.”
          Comment on Episode-2308- How Close is the Original US Government to an Anarchy by Ethan L.
Shows just how far we've fallen. All the 'anti-government' types that preach about the Constitution today would have been seen as the imperialists/monarchists in the 18th century. In fact, the formation of our government was so controversial that the anti-federalists were afraid of using their real names when writing their theses, which gave rise to the legends of 'Brutus', 'Publius', and 'Cato', who wrote those anti-constitution rants so long ago. Either way, it's clear the Madisons and Hamiltons certainly won, while the Jeffersonians have taken quite the beating. Now we all bow in the name of 'unitary executive authority'. The Emperor doesn't need clothes when he has access to 3d printers, sound cannons and cloning pods. My education in all of this started in my 8th grade Civics class. Do they even teach Civics any more in public school?
          Police need help to ID man suspected of cloning credit card
BUTLER — Police in the Village of Butler are attempting to identify a man suspected of using a credit card to make unauthorized purchases. The victim reported to police that her TCF Bank Visa check card number was used to make those purchases, even though she still has possession of the card. Numerous transactions occurred in Milwaukee County. Officials say the suspect used a blue Chase Bank card with the name Donald Pierce. The suspect presented an Illinois […]
          Molecular Biologist
MD-Edgewood, Molecular Biologist
Office: Edgewood Chemical Biological Center
Location: Edgewood, MD
Requisition No: 10.10.18b
Key Words: molecular biology, cloning, bacteriology, gene editing, cell sorting, cell culture
Project Overview: Excet Inc. is seeking to hire a research scientist with expertise in molecular biology laboratory techniques and general biological research. The ideal candidate shall demonstrat