Next Page: 10000

          Build a Java REST API with Java EE and OIDC      Cache   Translate Page      

Java EE allows you to build Java REST APIs quickly and easily with JAX-RS and JPA. Java EE is an umbrella standards specification that describes a number of Java technologies, including EJB, JPA, JAX-RS, and many others. It was originally designed to allow portability between Java application servers, and flourished in the early 2000s. Back then, application servers were all the rage and provided by many well-known companies such as IBM, BEA, and Sun. JBoss was a startup that disrupted the status quo and showed it was possible to develop a Java EE application server as an open source project, and give it away for free. JBoss was bought by RedHat in 2006.

In the early 2000s, Java developers used servlets and EJBs to develop their server applications. Hibernate and Spring came along in 2002 and 2004, respectively. Both technologies had a huge impact on Java developers everywhere, showing them it was possible to write distributed, robust applications without EJBs. Hibernate’s POJO model was eventually adopted as the JPA standard and heavily influenced EJB as well.

Fast forward to 2018, and Java EE certainly doesn’t look like it used to! Now, it’s mostly POJOs and annotations and far simpler to use.

Why Build a Java REST API with Java EE and Not Spring Boot?

Spring Boot is one of my favorite technologies in the Java ecosystem. It’s drastically reduced the configuration necessary in a Spring application and made it possible to whip up REST APIs in just a few lines of code. However, I’ve had a lot of API security questions lately from developers that aren’t using Spring Boot. Some of them aren’t even using Spring!

For this reason, I thought it’d be fun to build a Java REST API (using Java EE) that’s the same as a Spring Boot REST API I developed in the past. Namely, the “good-beers” API from my Bootiful Angular and Bootiful React posts.

Use Java EE to Build Your Java REST API

To begin, I asked my network on Twitter if any quickstarts existed for Java EE like start.spring.io. I received a few suggestions and started doing some research. David Blevins recommended I look at tomee-jaxrs-starter-project, so I started there. I also looked into the TomEE Maven Archetype, as recommended by Roberto Cortez.

I liked the jaxrs-starter project because it showed how to create a REST API with JAX-RS. The TomEE Maven archetype was helpful too, especially since it showed how to use JPA, H2, and JSF. I combined the two to create my own minimal starter that you can use to implement secure Java EE APIs on TomEE. You don’t have to use TomEE for these examples, but I haven’t tested them on other implementations.

If you get these examples working on other app servers, please let me know and I’ll update this blog post.

In these examples, I’ll be using Java 8 and Java EE 7.0 with TomEE 7.1.0. TomEE 7.x is the EE 7 compatible version; a TomEE 8.x branch exists for EE8 compatibility work, but there are no releases yet. I expect you to have Apache Maven installed too.

To begin, clone our Java EE REST API repository to your hard drive, and run it:

git clone https://github.com/oktadeveloper/okta-java-ee-rest-api-example.git javaee-rest-api
cd javaee-rest-api
mvn package tomee:run

Navigate to http://localhost:8080 and add a new beer.

Add beer

Click Add and you should see a success message.

Add beer success

Click View beers present to see the full list of beers.

Beers present

You can also view the list of good beers in the system at http://localhost:8080/good-beers. Below is the output when using HTTPie.

$ http :8080/good-beers
HTTP/1.1 200
Content-Type: application/json
Date: Wed, 29 Aug 2018 21:58:23 GMT
Server: Apache TomEE
Transfer-Encoding: chunked
[
    {
        "id": 101,
        "name": "Kentucky Brunch Brand Stout"
    },
    {
        "id": 102,
        "name": "Marshmallow Handjee"
    },
    {
        "id": 103,
        "name": "Barrel-Aged Abraxas"
    },
    {
        "id": 104,
        "name": "Heady Topper"
    },
    {
        "id": 108,
        "name": "White Rascal"
    }
]

Build a REST API with Java EE

I showed you what this application can do, but I haven’t talked about how it’s built. It has a few XML configuration files, but I’m going to skip over most of those. Here’s what the directory structure looks like:

$ tree .
.
├── LICENSE
├── README.md
├── pom.xml
└── src
    ├── main
    │   ├── java
    │   │   └── com
    │   │       └── okta
    │   │           └── developer
    │   │               ├── Beer.java
    │   │               ├── BeerBean.java
    │   │               ├── BeerResource.java
    │   │               ├── BeerService.java
    │   │               └── StartupBean.java
    │   ├── resources
    │   │   └── META-INF
    │   │       └── persistence.xml
    │   └── webapp
    │       ├── WEB-INF
    │       │   ├── beans.xml
    │       │   └── faces-config.xml
    │       ├── beer.xhtml
    │       ├── index.jsp
    │       └── result.xhtml
    └── test
        └── resources
            └── arquillian.xml

12 directories, 16 files

The most important XML files is the pom.xml that defines dependencies and allows you to run the TomEE Maven Plugin. It’s pretty short and sweet, with only one dependency and one plugin.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.okta.developer</groupId>
    <artifactId>java-ee-rest-api</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>war</packaging>
    <name>Java EE Webapp with JAX-RS API</name>
    <url>http://developer.okta.com</url>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <maven.compiler.target>1.8</maven.compiler.target>
        <maven.compiler.source>1.8</maven.compiler.source>
        <failOnMissingWebXml>false</failOnMissingWebXml>
        <javaee-api.version>7.0</javaee-api.version>
        <tomee.version>7.1.0</tomee.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>javax</groupId>
            <artifactId>javaee-api</artifactId>
            <version>${javaee-api.version}</version>
            <scope>provided</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.tomee.maven</groupId>
                <artifactId>tomee-maven-plugin</artifactId>
                <version>${tomee.version}</version>
                <configuration>
                    <context>ROOT</context>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

The main entity is Beer.java.

package com.okta.developer;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class Beer {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private int id;
    private String name;

    public Beer() {}

    public Beer(String name) {
        this.name = name;
    }

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String beerName) {
        this.name = beerName;
    }

    @Override
    public String toString() {
        return "Beer{" +
                "id=" + id +
                ", name='" + name + '\'' +
                '}';
    }
}

The database (a.k.a., datasource) is configured in src/main/resources/META-INF/persistence.xml.

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.0" xmlns="http://java.sun.com/xml/ns/persistence"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
    <persistence-unit name="beer-pu" transaction-type="JTA">
        <jta-data-source>beerDatabase</jta-data-source>
        <class>com.okta.developer.Beer</class>
        <properties>
            <property name="openjpa.jdbc.SynchronizeMappings" value="buildSchema(ForeignKeys=true)"/>
        </properties>
    </persistence-unit>
</persistence>

The BeerService.java class handles reading and saving this entity to the database using JPA’s EntityManager.

package com.okta.developer;

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.Query;
import javax.persistence.criteria.CriteriaQuery;
import java.util.List;

@Stateless
public class BeerService {

    @PersistenceContext(unitName = "beer-pu")
    private EntityManager entityManager;

    public void addBeer(Beer beer) {
        entityManager.persist(beer);
    }

    public List<Beer> getAllBeers() {
        CriteriaQuery<Beer> cq = entityManager.getCriteriaBuilder().createQuery(Beer.class);
        cq.select(cq.from(Beer.class));
        return entityManager.createQuery(cq).getResultList();
    }

    public void clear() {
        Query removeAll = entityManager.createQuery("delete from Beer");
        removeAll.executeUpdate();
    }
}

There’s a StartupBean.java that handles populating the database on startup, and clearing it on shutdown.

package com.okta.developer;

import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.inject.Inject;
import java.util.stream.Stream;

@Singleton
@Startup
public class StartupBean {
    private final BeerService beerService;

    @Inject
    public StartupBean(BeerService beerService) {
        this.beerService = beerService;
    }

    @PostConstruct
    private void startup() {
        // Top beers from https://www.beeradvocate.com/lists/top/
        Stream.of("Kentucky Brunch Brand Stout", "Marshmallow Handjee", 
                "Barrel-Aged Abraxas", "Heady Topper",
                "Budweiser", "Coors Light", "PBR").forEach(name ->
                beerService.addBeer(new Beer(name))
        );
        beerService.getAllBeers().forEach(System.out::println);
    }

    @PreDestroy
    private void shutdown() {
        beerService.clear();
    }
}

These three classes make up the foundation of the app, plus there’s a BeerResource.java class that uses JAX-RS to expose the /good-beers endpoint.

package com.okta.developer;

import javax.ejb.Lock;
import javax.ejb.Singleton;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import java.util.List;
import java.util.stream.Collectors;

import static javax.ejb.LockType.READ;
import static javax.ws.rs.core.MediaType.APPLICATION_JSON;

@Lock(READ)
@Singleton
@Path("/good-beers")
public class BeerResource {
    private final BeerService beerService;

    @Inject
    public BeerResource(BeerService beerService) {
        this.beerService = beerService;
    }

    @GET
    @Produces({APPLICATION_JSON})
    public List<Beer> getGoodBeers() {
        return beerService.getAllBeers().stream()
                .filter(this::isGreat)
                .collect(Collectors.toList());
    }

    private boolean isGreat(Beer beer) {
        return !beer.getName().equals("Budweiser") &&
                !beer.getName().equals("Coors Light") &&
                !beer.getName().equals("PBR");
    }
}

Lastly, there’s a BeerBean.java class is used as a managed bean for JSF.

package com.okta.developer;

import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.inject.Named;
import java.util.List;

@Named
@RequestScoped
public class BeerBean {

    @Inject
    private BeerService beerService;
    private List<Beer> beersAvailable;
    private String name;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public List<Beer> getBeersAvailable() {
        return beersAvailable;
    }

    public void setBeersAvailable(List<Beer> beersAvailable) {
        this.beersAvailable = beersAvailable;
    }

    public String fetchBeers() {
        beersAvailable = beerService.getAllBeers();
        return "success";
    }

    public String add() {
        Beer beer = new Beer();
        beer.setName(name);
        beerService.addBeer(beer);
        return "success";
    }
}

You now have a REST API built with Java EE! However, it’s not secure. In the following sections, I’ll show you how to secure it using Okta’s JWT Verifier for Java, Spring Security, and Pac4j.

Add OIDC Security with Okta to Your Java REST API

You will need to create an OIDC Application in Okta to verify the security configurations you’re about to implement work. To make this effortless, you can use Okta’s API for OIDC. At Okta, our goal is to make identity management a lot easier, more secure, and more scalable than what you’re used to. Okta is a cloud service that allows developers to create, edit, and securely store user accounts and user account data, and connect them with one or multiple applications. Our API enables you to:

Are you sold? Register for a forever-free developer account today! When you’re finished, complete the steps below to create an OIDC app.

  1. Log in to your developer account on developer.okta.com.
  2. Navigate to Applications and click on Add Application.
  3. Select Web and click Next.
  4. Give the application a name (.e.g., Java EE Secure API) and add the following as Login redirect URIs:
    • http://localhost:3000/implicit/callback
    • http://localhost:8080/login/oauth2/code/okta
    • http://localhost:8080/callback?client_name=OidcClient
  5. Click Done, then edit the project and enable “Implicit (Hybrid)” as a grant type (allow ID and access tokens) and click Save.

Protect Your Java REST API with JWT Verifier

To validate JWTs from Okta, you’ll need to add Okta JWT Verifier for Java to your pom.xml.

<properties>
    ...
    <okta-jwt.version>0.3.0</okta-jwt.version>
</properties>

<dependencies>
    ...
    <dependency>
        <groupId>com.okta.jwt</groupId>
        <artifactId>okta-jwt-verifier</artifactId>
        <version>${okta-jwt.version}</version>
    </dependency>
</dependencies>

Then create a JwtFilter.java (in the src/main/java/com/okta/developer directory). This filter looks for an authorization header with an access token in it. If it exists, it validates it and prints out the user’s sub, a.k.a. their email address. If it doesn’t exist, or is in valid, an access denied status is returned.

Make sure to replace {yourOktaDomain} and {clientId} with the settings from the app you created.

package com.okta.developer;

import com.nimbusds.oauth2.sdk.ParseException;
import com.okta.jwt.JoseException;
import com.okta.jwt.Jwt;
import com.okta.jwt.JwtHelper;
import com.okta.jwt.JwtVerifier;

import javax.servlet.*;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;

@WebFilter(filterName = "jwtFilter", urlPatterns = "/*")
public class JwtFilter implements Filter {
    private JwtVerifier jwtVerifier;

    @Override
    public void init(FilterConfig filterConfig) {
        try {
            jwtVerifier = new JwtHelper()
                    .setIssuerUrl("/oauth2/default")
                    .setClientId("{yourClientId}")
                    .build();
        } catch (IOException | ParseException e) {
            System.err.print("Configuring JWT Verifier failed!");
            e.printStackTrace();
        }
    }

    @Override
    public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse,
                         FilterChain chain) throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) servletRequest;
        HttpServletResponse response = (HttpServletResponse) servletResponse;
        System.out.println("In JwtFilter, path: " + request.getRequestURI());

        // Get access token from authorization header
        String authHeader = request.getHeader("authorization");
        if (authHeader == null) {
            response.sendError(HttpServletResponse.SC_UNAUTHORIZED, "Access denied.");
            return;
        } else {
            String accessToken = authHeader.substring(authHeader.indexOf("Bearer ") + 7);
            try {
                Jwt jwt = jwtVerifier.decodeAccessToken(accessToken);
                System.out.println("Hello, " + jwt.getClaims().get("sub"));
            } catch (JoseException e) {
                e.printStackTrace();
                response.sendError(HttpServletResponse.SC_UNAUTHORIZED, "Access denied.");
                return;
            }
        }

        chain.doFilter(request, response);
    }

    @Override
    public void destroy() {
    }
}

To ensure this filter is working, restart your app and run:

mvn package tomee:run

If you navigate to http://localhost:8080/good-beers in your browser, you’ll see an access denied error.

blog/javaee-rest-api/tomee-401.png

To prove it works with a valid JWT, you can clone my Bootiful React project, and run its UI:

git clone -b okta https://github.com/oktadeveloper/spring-boot-react-example.git bootiful-react
cd bootiful-react/client
npm install

Edit this project’s client/src/App.tsx file and change the issuer and clientId to match your application.

const config = {
  issuer: '/oauth2/default',
  redirectUri: window.location.origin + '/implicit/callback',
  clientId: '{yourClientId}'
};

Then start it:

npm start

You should then be able to login at http://localhost:3000 with the credentials you created your account with. However, you won’t be able to load any beers from the API because of a CORS error (in your browser’s developer console).

Failed to load http://localhost:8080/good-beers: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:3000' is therefore not allowed access.

TIP: If you see a 401 and no CORS error, it likely means your client IDs don’t match.

To fix this CORS error, add a CorsFilter.java alongside your JwtFilter.java class. The filter below will allow an OPTIONS request, and send access-control headers back that allow any origin, GET methods, and any headers. I recommend you to make these settings a bit more specific in production.

package com.okta.developer;

import javax.servlet.*;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;

@WebFilter(filterName = "corsFilter")
public class CorsFilter implements Filter {

    @Override
    public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) servletRequest;
        HttpServletResponse response = (HttpServletResponse) servletResponse;
        System.out.println("In CorsFilter, method: " + request.getMethod());

        // Authorize (allow) all domains to consume the content
        response.addHeader("Access-Control-Allow-Origin", "http://localhost:3000");
        response.addHeader("Access-Control-Allow-Methods", "GET");
        response.addHeader("Access-Control-Allow-Headers", "*");

        // For HTTP OPTIONS verb/method reply with ACCEPTED status code -- per CORS handshake
        if (request.getMethod().equals("OPTIONS")) {
            response.setStatus(HttpServletResponse.SC_ACCEPTED);
            return;
        }

        // pass the request along the filter chain
        chain.doFilter(request, response);
    }

    @Override
    public void init(FilterConfig config) {
    }

    @Override
    public void destroy() {
    }
}

Both of the filters you’ve added use @WebFilter to register themselves. This is a convenient annotation, but it doesn’t provide any filter ordering capabilities. To workaround this missing feature, modify JwtFilter so it doesn’t have a urlPattern in its @WebFilter.

@WebFilter(filterName = "jwtFilter")

Then create a src/main/webapp/WEB-INF/web.xml file and populate it with the following XML. These filter mappings ensure the CorsFilter is processed first.

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="3.1"
         xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd">

    <filter-mapping>
        <filter-name>corsFilter</filter-name>
        <url-pattern>/*</url-pattern>
    </filter-mapping>

    <filter-mapping>
        <filter-name>jwtFilter</filter-name>
        <url-pattern>/*</url-pattern>
    </filter-mapping>
</web-app>

Restart your Java API and now everything should work!

Beer List in React UI

In your console, you should see messages similar to mine:

In CorsFilter, method: OPTIONS
In CorsFilter, method: GET
In JwtFilter, path: /good-beers
Hello, demo@okta.com

Using a filter with Okta’s JWT Verifier is an easy way to implement a resource server (in OAuth 2.0 nomenclature). However, it doesn’t provide you with any information about the user. The JwtVerifier interface does have a decodeIdToken(String idToken, String nonce) method, but you’d have to pass the ID token in from your client to use it.

In the next two sections, I’ll show you how you can use Spring Security and Pac4j to implement similar security. As a bonus, I’ll show you how to prompt the user to login (when they try to access the API directly) and get the user’s information.

Secure Your Java REST API with Spring Security

Spring Security is one of my favorite frameworks in Javaland. Most of the examples on this blog use Spring Boot when showing how to use Spring Security. I’m going to use the latest version – 5.1.0.RC2 – so this tutorial stays up to date for a few months.

Revert your changes to add JWT Verifier, or simply delete web.xml to continue.

Modify your pom.xml to have the necessary dependencies for Spring Security. You’ll also need to add Spring’s snapshot repositories to get the release candidate.

<properties>
    ...
    <spring-security.version>5.1.0.RC2</spring-security.version>
    <spring.version>5.1.0.RC3</spring.version>
    <jackson.version>2.9.6</jackson.version>
</properties>

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-framework-bom</artifactId>
            <version>${spring.version}</version>
            <scope>import</scope>
            <type>pom</type>
        </dependency>
        <dependency>
            <groupId>org.springframework.security</groupId>
            <artifactId>spring-security-bom</artifactId>
            <version>${spring-security.version}</version>
            <scope>import</scope>
            <type>pom</type>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    ...
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-webmvc</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.security</groupId>
        <artifactId>spring-security-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.security</groupId>
        <artifactId>spring-security-config</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework
          [Serie] Better Call Saul      Cache   Translate Page      
Replies: 1459 Last poster: Apache4u at 13-09-2018 00:26 Topic is Open Inderdaad de omslag naar Saul gezien. En Huell natuurlijk.
          Linux Administrator - Integricon Solutions Inc - Toronto, ON      Cache   Translate Page      
Linux administration experience. Strong Enterprise Linux Experience such as Redhat Enterprise Linux. Apache, TomCat, Linux Redhat ES, VMWare ESX on HP Proliant....
From Integricon Solutions Inc. - Tue, 11 Sep 2018 09:25:10 GMT - View all Toronto, ON jobs
          Technical Writer II - Alexa - Amazon.com - Seattle, WA      Cache   Translate Page      
Frameworks and Infrastructure with tools like Apache MXNet and TensorFlow, API-driven Services like Amazon Lex, Amazon Polly and Amazon Rekognition to quickly...
From Amazon.com - Mon, 13 Aug 2018 19:22:01 GMT - View all Seattle, WA jobs
          Kafka Admin      Cache   Translate Page      
TX-Irving, We are seeking an Apache Kafka Administrator. The ideal candidate with have strong knowledge and experience with the installation and configuration of Kafka brokers, Kafka Managers. Experience with clustering and high availability configurations. Primary Responsibilities: Hands-on experience in standing up and administering on premise Kafka platform. Experience in Kafka brokers, zookeepers, Kafka
          Azure Powerful Debugging Tool for Spark for HDInsight!      Cache   Translate Page      
Azure Powerful Debugging Tool for Spark for HDInsight!

Recently Microsoft have announced updates for their debugging tools for the HDInsight feature. Many businesses currently use Microsoft Cosmos which runs millions of jobs constantly which made being able to scale or manage jobs being a huge challenge. To combat this Microsoft have announced the early access of the Apache Spark Debugging Toolset for HDInsight [...]

Continue reading Azure Powerful Debugging Tool for Spark for HDInsight! at risual.


          Liferay / Java Developer for Birmingham, AL Location      Cache   Translate Page      
AL-BIRMINGHAM, Position : Liferay / Java Developer Location : Birmingham, AL Job type : 6+ Months Contract Interview Mode : Telephonic + Skype Required Experience/ Skills: Primary Skill: Liferay Portal, Java / J2EE 4+ years of experience in Java / J2EE technologies 2+ years of experience in Liferay Enterprise Portal. Experience with Apache, Tomcat server and knowledge of UNIX. PL/SQL skills and good knowledge of
          Apache Maven Shade Plugin Version 3.2.0      Cache   Translate Page      

The Apache Maven team is pleased to announce the release of the Apache Maven Shade Plugin, version 3.2.0.

This plugin provides the capability to package the artifact in an uber-jar, including its dependencies and to shade – i.e. rename – the packages of some of the dependencies.

You should specify the version in your project’s plugin configuration:

1
2
3
4
5
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.2.0</version>
</plugin>

You can download the appropriate sources etc. from the download page.

You can download the appropriate sources etc. from the download page.

Release Notes – Maven Shade Plugin – Version 3.2.0

Bug:

  • MSHADE-289 – Maven Shade Plugin does not work under Java 10

Improvement:

Dependency upgrades:

  • MSHADE-294 – Upgrade maven-plugins parent to version 32
  • MSHADE-296 – Upgrade maven-artifact-transfer 0.10.0

Enjoy,

-The Apache Maven team


          少数チームで自社WEBサービスを開発したいフロントエンドエンジニアを募集! by 株式会社エグゼック      Cache   Translate Page      
当社のお客様である全国の写真館への導入実績が順調に増え、今後更なる広がりが見込まれる中、新たな仲間を募集します。 ◆業務内容 ・自社サービスである「フォトストア」のバージョンアップ ・新機能開発 ・周辺に位置付く新サービスの開発 ・新規事業開発など 上記に関わるフロントエンドを担当していただきます。 ◆開発環境 言語      :HTML5+CSS3・JavaScript フレームワーク :自社フレームワーク・.NET Framework・jQuery・prototype.js 環境      :Linux、Cent OS、Apache、memcached、Eclipse データベース  :MySQL、PostgreSQL、Microsoft SQL Server プロジェクト管理:Redmine 自社プロダクトであるフォトストアは、写真館がクライアントなります。 そのお客様の声から得たニーズや、年に1回行う約10,000名からのアンケート結果などを元に、 継続的に新規機能開発、機能改善などの開発に取り組んで頂きます。  開発案件は営業側から来る要望(顧客の声で優先すべきもの)がRedmineでチケット登録され、開発チームと営業側とで開発を行うか行わないか、課題を解決する最適な機能など検討を重ねた上で、開発するチケットを決定し、各開発担当者に割り振られていきます。 各開発担当者は、仕様確認・設計・工数を確認しながらコーディングに入り、コーディングを終えれば、GitHub 上で管理者にプルリクを出し、レビューが通ればリリースされていきます。 ■開発チーム 役割は多少分けているものの、インフラもみたり、フロントも書いたり、サーバーも書いたりと、幅広く担当できます。フルスタックエンジニアを目指す方にはぴったりです。 ■環境について 技術に対する考え方は、まず最新動向はきっちり仕入れよう。仕入れた上で何がいいか考えよう。最も適する技術を採用しよう。古いか新しいかは関係ない。そんな感じです。ただ、今開発に追われていて、やりたくても手をつけられていないことも多く、これらの考えを共有しながら一緒に仕事ができる方が理想的です。 また、代表の古田がエンジニアということもあり、たまに代表といっしょにプログラミングを組んだり、技術ネタで議論したり、ビジネスという切り口からエンジニアとしてどうサービスを作りあげていくかという議論ができる点などなど、これも当社の特徴の1つかも知れません。 ■今後の直近の課題 これからの開発メンバー増強にともない、チームで開発する体制を整備して、スピーディに短周期で機能改善を回していくことと考えています。なので、これから当社のエンジニアチームのカタチを作っていくというフェーズなので、それを楽しめる方には最適です。 ■こんな方求めてます! 成長中のベンチャー企業で、将来会社が目指しているビジョンに向かって一緒に情熱を持って、いっしょに楽しみながら互いに成長ができる仲間を求めています。 条件としては、一生懸命なこと、成長したい気持ちがあること、ふざけるけど根は真面目なこと、そして仲間を思いやれる人、そして技術にワクワク感を感じ、長くいっしょに働けること。 まずは、どんな雰囲気なのか、ぜひコーヒーでも飲みにきてください。 アットホームな、でもゆるゆるではなく、ちょっと厳しい、個性的で、なんだかんだ真面目なメンバーの集まり。そんな感じがEXECかなと思います!
          This is the Terrifying Reality of America's 'Deadly, Dangerous and Destabilizing Role' in Global Arms Supply      Cache   Translate Page      
Want to be part of peace? Stop working for, and investing in, these Big 5 arms manufacturers.

Most Americans want peace—in the world, in their country, and in their own homes and communities.

Even the most committed Trump supporter might want peace in Latin America and the Middle East—if it means “illegal” refugees stop “pouring” in to the USA.

Yet, somehow, U.S. “commandos” were deployed to 149 countries in 2017 alone.

Worse, according to a 2016 International Criminal Court report, U.S. troops allegedly committed war crimes in Afghanistan, and the U.S. military and CIA allegedly tortured at least 88 people in Poland, Romania, and other countries as well.

So why is it then that so many Americans work for private, for-profit companies profiting from war? And why are so many Americans invested financially in the merchants of death profiting from war and manufactured terror around the globe?

Perhaps it’s out of simple ignorance of a) not knowing where their money is invested and b) not knowing the U.S. companies that are manufacturing the weapons of death used by the U.S. and their frenemies around the world.

Enter CodePink’s new report, War Profiteers: The U.S. War Machine and the Arming of Repressive Regimes, a handy guide to the companies that manufacture weapons, and who they are selling them to. The report is primarily focused on Boeing, General Dynamics, Lockheed Martin, Northrop Grumman, and Raytheon—the largest military manufacturers in the U.S., and their deals with Saudi Arabia, Israel, and Egypt.

“The United States is the leading purveyor of arms sales, global war and militarism,” the report notes. “With 800 military bases in 80 countries around the world, the U.S. has a larger military budget than the next seven countries combined, as well as an arms industry that dominates the global arms trade.”

The report delves into each of the Big Five arms manufacturers and their role and involvement in the murders of civilians and children around the world. It seems what are heralded as symbols of freedom in the U.S. are viewed as symbols of absolute destruction and death to the children who bear witness to the violence inflicted upon their communities by, for example, Apache helicopters or Eagle fighters (Boeing-made combat aircraft supplied to Israel).

Of course, it doesn’t help when U.S. leaders make threats and brag about their military or economic power and their access to “the button.” For example, the ICC’s recent accusations inspired national security adviser John Bolton to threaten the ICC as “illegitimate” and that the U.S. “will respond against the ICC and its personnel.”

Current U.S. leadership actions weren’t without foreshadowing by America’s military leaders of ages past. “Those who measure security solely in terms of offensive capacity distort its meaning and mislead those who pay them heed,” President Dwight D. Eisenhower declared decades ago. “No modern nation has ever equaled the crushing offensive power attained by the German war machine in 1939,” he continued. “No modern nation was broken and smashed as was Germany six years later.”

It’s not just the arrogant language leaders use, but also the corruption of language. The report notes that, “Officially sanctioned terms like ‘defense’ and ‘security’ act as a subterfuge to diminish and camouflage the deadly, dangerous and destabilizing role that the United States is playing in the world.”

The cost of that coopting and corruption of language and perversion of justice and humanity resonates deeply throughout the world, on an emotional, economic, and even spiritual level. Even Eisenhower, the report also notes, said, “The total influence—economic, political, even spiritual—is felt in every city, every State House, every office of the Federal government. Yet we must not fail to comprehend its grave implications. Our toil, resources, and livelihood are all involved; so is the very structure of our society.”

So why is it, then, that so many Americans continue to work for the companies profiting from heinous wars? It probably isn’t Raytheon’s corny “Forget Silicon Valley: Working for a defense contractor is surprisingly cool” campaign that is attracting people to military subcontractors.

The numbers of people working in this sector—nearly half a million officially in the Big Five mentioned in the report—is troublesome considering the number of EPA employees who have left under the Trump administration: an “exodus” of over 1,600 employees who are refusing to be a part of the problem rather than the solution.

As Eisenhower said, “Disarmament, with mutual honor and confidence, is a continuing imperative. Together we must learn how to compose differences, not with arms, but with intellect and decent purpose.” That can start by moving your money—and your employment—to a different sector. But first, arm yourself with knowledge.

You can access the full CodePink report here.

This article was produced by Local Peace Economy, a project of the Independent Media Institute.

 

Related Stories


          Arcsight Delivery Quality Assurance Resource Engineer, Network Security at Ecscorp Resources      Cache   Translate Page      
Ecscorp Resources is a solution engineering firm, established in the year 2001 with a cumulative of over 100 years experience. Our business is driven by passion and the spirit of friendliness; we harness the power of creativity and technology to drive innovation and deliver cutting-edge solutions to increase productivity. Our passion, experience, expertise and shared knowledge have forged us into a formidable catalyst for desirable, sustainable change and incessant growth. We strive to provide achievable solutions that efficiently and measurably support goal-focused business priorities and objectives.Duration: 3 months Detailed Description ArcSight division, is a leading global provider of Compliance and Security Management solutions that protect enterprises, education and governmental agencies. ArcSight helps customers comply with corporate and regulatory policy, safeguard their assets and processes and control risk. The ArcSight platform collects and correlates user activity and event data across the enterprise so that businesses can rapidly identify, prioritize and respond to compliance violations, policy breaches, cybersecurity attacks, and insider threats. The successful candidate for this position will work on the ArcSight R&D team. This is a hands-on position that will require the candidate to work with data collected from various network devices in combination with the various ArcSight product lines in order to deliver content that will help address the needs of all of ArcSight's customers. The ideal candidate will have a good understanding of enterprise security coupled with hands-on networking and security skills as well as an ability to write and understand scripting languages such as Perl, Python. Research, analyze and understand log sources, particularly from various devices in an enterprise network Appropriately categorize the security messages generated by various sources into the multi-dimensional ArcSight Normalization schema Write and modify scripts to parse out messages and interface with the ArcSight categorization database Work on content and vulnerability update releases Write scripts and automation to optimize various processes involved Understand content for ArcSight ESM, including correlation rules, dashboards, reports, visualizations, etc. Understand requirements to write content to address use cases based on customer requests and feedback Assist in building comprehensive, correct and useful ArcSight Connector and ESM content to ArcSight customers on schedule. Requirements Excellent knowledge of IT operations, administration and security Hands-on experience of a variety of different networking and security devices, such as Firewalls, Routers, IDS/IPS etc. Ability to examine operational and security logs generated by networking and security devices, identify the meaning and severity of them Understand different logging mechanisms, standards and formats Very strong practical Linux-based and Windows-based system administration skills Strong scripting skills using languages (Shell, Perl, Python etc), and Regex Hands-on experience of database such as MySQL Knowledge of Security Information Management solution such as ArcSight ESM Experience with a version control system (Perforce, GitHub) Advanced experience with Microsoft Excel Excellent written and verbal communication skills Must possess ability and desire to learn new technologies quickly while remaining detailed oriented Strong analytical skill and problem solving skills, multi-tasking. Pluses: Network device or Security certification (CISSP, CEH etc) Experience with application server such as Apache Tomcat Work experience in security operation center (SOC).
          LXer: How To Install and Configure Nextcloud with Apache on Ubuntu 18.04      Cache   Translate Page      
Published at LXer: Nextcloud is an open source, self-hosted file share and collaboration platform, similar to Dropbox. In this tutorial we'll show you how to install and configure Nextcloud with...
          VirtualHostX 8.1.0 – Host multiple websites on your Mac.      Cache   Translate Page      
VirtualHostX is the easiest way to host and share multiple websites on your Mac. It’s the perfect solution for web designers working on more than one project at a time. With VirtualHostX you can easily create and manage unlimited Apache websites with just a few clicks. New in Version 7.0 Built on-top of the power […]
          Situacion sobre las areas de concesión , EDIs y material rodante.      Cache   Translate Page      
apache23








          文章: Apache Beam实战指南 | 玩转KafkaIO与Flink      Cache   Translate Page      

本文是Apache Beam实战指南系列文章的第二篇内容,将重点介绍 Apache Beam与Flink的关系,对Beam框架中的KafkaIO和Flink源码进行剖析,并结合应用示例和代码解读带你进一步了解如何结合Beam玩转Kafka和Flink。

By 张海涛
          This is the Terrifying Reality of America's 'Deadly, Dangerous and Destabilizing Role' in Global Arms Supply      Cache   Translate Page      
Want to be part of peace? Stop working for, and investing in, these Big 5 arms manufacturers.

Most Americans want peace—in the world, in their country, and in their own homes and communities.

Even the most committed Trump supporter might want peace in Latin America and the Middle East—if it means “illegal” refugees stop “pouring” in to the USA.

Yet, somehow, U.S. “commandos” were deployed to 149 countries in 2017 alone.

Worse, according to a 2016 International Criminal Court report, U.S. troops allegedly committed war crimes in Afghanistan, and the U.S. military and CIA allegedly tortured at least 88 people in Poland, Romania, and other countries as well.

So why is it then that so many Americans work for private, for-profit companies profiting from war? And why are so many Americans invested financially in the merchants of death profiting from war and manufactured terror around the globe?

Perhaps it’s out of simple ignorance of a) not knowing where their money is invested and b) not knowing the U.S. companies that are manufacturing the weapons of death used by the U.S. and their frenemies around the world.

Enter CodePink’s new report, War Profiteers: The U.S. War Machine and the Arming of Repressive Regimes, a handy guide to the companies that manufacture weapons, and who they are selling them to. The report is primarily focused on Boeing, General Dynamics, Lockheed Martin, Northrop Grumman, and Raytheon—the largest military manufacturers in the U.S., and their deals with Saudi Arabia, Israel, and Egypt.

“The United States is the leading purveyor of arms sales, global war and militarism,” the report notes. “With 800 military bases in 80 countries around the world, the U.S. has a larger military budget than the next seven countries combined, as well as an arms industry that dominates the global arms trade.”

The report delves into each of the Big Five arms manufacturers and their role and involvement in the murders of civilians and children around the world. It seems what are heralded as symbols of freedom in the U.S. are viewed as symbols of absolute destruction and death to the children who bear witness to the violence inflicted upon their communities by, for example, Apache helicopters or Eagle fighters (Boeing-made combat aircraft supplied to Israel).

Of course, it doesn’t help when U.S. leaders make threats and brag about their military or economic power and their access to “the button.” For example, the ICC’s recent accusations inspired national security adviser John Bolton to threaten the ICC as “illegitimate” and that the U.S. “will respond against the ICC and its personnel.”

Current U.S. leadership actions weren’t without foreshadowing by America’s military leaders of ages past. “Those who measure security solely in terms of offensive capacity distort its meaning and mislead those who pay them heed,” President Dwight D. Eisenhower declared decades ago. “No modern nation has ever equaled the crushing offensive power attained by the German war machine in 1939,” he continued. “No modern nation was broken and smashed as was Germany six years later.”

It’s not just the arrogant language leaders use, but also the corruption of language. The report notes that, “Officially sanctioned terms like ‘defense’ and ‘security’ act as a subterfuge to diminish and camouflage the deadly, dangerous and destabilizing role that the United States is playing in the world.”

The cost of that coopting and corruption of language and perversion of justice and humanity resonates deeply throughout the world, on an emotional, economic, and even spiritual level. Even Eisenhower, the report also notes, said, “The total influence—economic, political, even spiritual—is felt in every city, every State House, every office of the Federal government. Yet we must not fail to comprehend its grave implications. Our toil, resources, and livelihood are all involved; so is the very structure of our society.”

So why is it, then, that so many Americans continue to work for the companies profiting from heinous wars? It probably isn’t Raytheon’s corny “Forget Silicon Valley: Working for a defense contractor is surprisingly cool” campaign that is attracting people to military subcontractors.

The numbers of people working in this sector—nearly half a million officially in the Big Five mentioned in the report—is troublesome considering the number of EPA employees who have left under the Trump administration: an “exodus” of over 1,600 employees who are refusing to be a part of the problem rather than the solution.

As Eisenhower said, “Disarmament, with mutual honor and confidence, is a continuing imperative. Together we must learn how to compose differences, not with arms, but with intellect and decent purpose.” That can start by moving your money—and your employment—to a different sector. But first, arm yourself with knowledge.

You can access the full CodePink report here.

This article was produced by Local Peace Economy, a project of the Independent Media Institute.


          Senior Developer - The Guarantee Company of North America - Cambridge, ON      Cache   Translate Page      
Familiarity with Apache Tomcat, WebSphere or WebSphere J2EE containers. About The Guarantee Company of North America....
From The Guarantee Company of North America - Wed, 22 Aug 2018 16:19:56 GMT - View all Cambridge, ON jobs
          Application Developer - The Guarantee Company of North America - Cambridge, ON      Cache   Translate Page      
Familiarity with Apache Tomcat, WebSphere or WebSphere J2EE containers. About The Guarantee Company of North America....
From The Guarantee Company of North America - Mon, 20 Aug 2018 22:23:02 GMT - View all Cambridge, ON jobs
          Running Apache Cassandra on Kubernetes      Cache   Translate Page      

As Kubernetes becomes the de facto solution for container orchestration, more and more developers (and enterprises) want to run Apache Cassandra databases on Kubernetes . It's easy to get started―especially given the capabilities that Kubernetes' StatefulSets bring to the table. Kubernetes, though, certainly has room to improve when it comes storing data in-state and understanding how different databases work.

For example, Kubernetes doesn't know if you're writing to a leader or a follower database, or to a multi-sharded leader infrastructure, or to a single database instance. StatefulSets―workload API objects used to manage stateful applications―offer the building blocks required for stable, unique network identifiers; stable persistent storage; ordered and smooth deployment and scaling, deletion, and termination; and automated rolling updates. However, while getting started with Cassandra on Kubernetes might be easy, it can still be a challenge to run and manage.

To overcome some of these hurdles, we decided to build an open source Cassandra operator that runs and operates Cassandra within Kubernetes; you can think of it as Cassandra-as-a-Service on top of Kubernetes. We've made this Cassandra operator open source and freely available on GitHub. It remains a work in progress by our Instaclustr team and our partner contributors―but it is functional and ready for use. The Cassandra operator supports Docker images, which are open source and also available from the project's GitHub repository.

More on Kubernetes

What is Kubernetes? How to make your first upstream Kubernetes contribution Getting started with Kubernetes Automated provisioning in Kubernetes Test drive OpenShift hands-on

While it's possible for developers to build scripts for managing and running Cassandra on Kubernetes, the Cassandra operator offers the advantage of providing the same consistent, reproducible environment, as well as the same consistent, reproducible set of operations through different production clusters. (This is true across development, staging, and QA environments.) Also, because best practices are already built into the operator, development teams are spared operational concerns and can focus on their core capabilities.

What is a Kubernetes operator?

A Kubernetes operator consists of two components: a controller and a custom resource definition (CRD). The CRD allows devs to create Cassandra objects in Kubernetes. It's an extension of Kubernetes that allows us to define custom objects or resources using Kubernetes that our controller can then listen to for any changes to the resource definition. Devs can define an object in Kubernetes that contains configuration options for Cassandra, such as cluster name, node count, JVM tuning options, etc.―all the information you want to give Kubernetes about how to deploy Cassandra.

You can isolate the Cassandra operator to a specific Kubernetes namespace, define what kinds of persistent volumes it should use, and more. The Cassandra operator's controller listens to state changes on the Cassandra CRD and will create its own StatefulSets to match those requirements. It will also manage those operations and can ensure repairs, backups, and safe scaling as specified via the CRD. In this way, it leverages the Kubernetes concept of building controllers upon other controllers in order to achieve intelligent and helpful behaviors.

So, how does it work? cassandra-operator-architecture.png
Running Apache Cassandra on Kubernetes

Architecturally, the Cassandra controller connects to the Kubernetes Master. It listens to state changes and manipulates pod definitions and CRDs. It then deploys them, waits for changes to occur, and repeats until all necessary changescomplete fully.

The Cassandra controller can, of course, perform operations within the Cassandra cluster. For example, want to scale down your Cassandra cluster? Instead of manipulating the StatefulSet to handle this task, the controller will see the CRD change. The node count will change to a lower number (say from six to five). The controller will get that state change, and it will first run a decommission operation on the Cassandra node that will be removed. This ensures that the Cassandra node stops gracefully and redistributes and rebalances the data it holds across the remaining nodes. Once the Cassandra controller sees this has happened successfully, it will modify that StatefulSet definition to allow Kubernetes to decommission that pod. Thus, the Cassandra controller brings needed intelligence to the Kubernetes environment to run Cassandra properly and ensure smoother operations.

As we continue this project and iterate on the Cassandra operator, our goal is to add new components that will continue to expand the tool's features and value. A good example is Cassandra SideCar (shown in the diagram above), which can take responsibility for tasks like backups and repairs. Current and future features of the project can be viewed on GitHub . Our goal for the Cassandra operator is to give devs a powerful, open source option for running Cassandra on Kubernetes with a simplicity and grace that has not yet been all that easy to achieve.

Topics

Kubernetes

About the author
Running Apache Cassandra on Kubernetes
Ben Bromhead

Ben Bromhead is Chief Technology Officer and Co-Founder at Instaclustr , an open source-as-a-service company. Ben is located in Instaclustr's California office and is active in the Apache Cassandra community. Prior to Instaclustr, Ben had been working as an independent consultant developing NoSQL solutions for enterprises, and he ran a high-tech cryptographic and cyber security formal testing laboratory at BAE Systems and Stratsec.

More about me

Learn how you can contribute
          学习 HDFS(四):高可用      Cache   Translate Page      
架构

HDFS 采用了主从(Master/Slave)架构,就不可避免的要面对单点失效(SPOF,Single Point of Failure)的问题。Hadoop 2.X 之后,提供了对高可用(HA)的支持,架构如下所示:


学习 HDFS(四):高可用
主备切换

在高可用 HDFS 集群中,存在多个 NameNode。其中,有且只有一个 NameNode 是 Active 状态,其它 NameNode 是 Standby 状态。只有 Active 状态的 NameNode 提供服务。如果多个 NameNode 同时提供服务,会产生脑裂(Split Brain)的情况,从而增加了维护数据一致性的成本。

当 Active NameNode 发生故障,Standby NameNode 会变成 Active NameNode 继续提供服务,实现主备切换。

数据同步

Active NameNode 和 Standby NameNode 数据同步,通过 QJM(Quorum Journal Manager)实现。

为了理解 QJM 数据同步的原理,需要先理解 Hadoop 检查点(Checkpoint)机制。检查点机制用于从故障或重启快速恢复数据。

HDFS 元数据包含两种文件:

命名空间镜像 fsimage 文件,包括文件系统目录树、文件/目录信息和文件件的数据块索引,位置在 dfs.namenode.name.dir 目录下; 编辑日志 edits 文件,位置在 dfs.namenode.edits.dir 目录下。

旧的 fsimgage 通过重放 edits 编辑日志,生成新的 fsimage 。


学习 HDFS(四):高可用

集群中的 JournalNode,通常是至少3个节点,作用相当于共享存储。Active NameNode 向 JournalNode 写编辑日志数据,Standby NameNode 从 JournalNode 读编辑日志数据,从而实现了数据同步。

故障检测

在高可用 HDFS 集群中,ZKFC(Zookeeper Failover Controller)用于监控 NameNode,一个 Failover Controller 监控一个 NameNode,当 Active NameNode 不可用时,触发自动故障恢复。

参考 A Guide to Checkpointing in Hadoop - Cloudera Engineering Blog Apache Hadoop - HDFS High Availability Using the Quorum Journal Manager Hadoop NameNode 高可用(High Availability)实现解析
          Apache Ambari Views: Configuring & Creating View Instances      Cache   Translate Page      
1. Objective Apache Ambari Views

In this Ambari tutorial , we will learn the whole concept of Apache Ambari Views. Also, we will see how to configure View Instances along with the way to create View Instances in Ambari.

So, let’s begin Ambari views Tutorial.


Apache Ambari Views: Configuring &amp; Creating View Instances

Apache Ambari Views: Configuring & Creating View Instances

2. What are the Ambari Views?

A Framework that enables developers to create UI components, or Views, that “plug into” the Ambari Web interface is what we call Apache Ambari Views. However, automatically Ambari creates and presents some instances of Views to users when the service used by that View is added to the cluster. As an example, the YARN Queue Manager View displays to Ambari web users, if Apache YARN service is added to the cluster. Although, the Ambari Admin user must manually create a view instance, in other cases.

Have a look at Ambari Security Guide

In order to extend and customize the Ambari web, views enable us and also they help us to meet our specific needs.

Moreover, to permit third parties to plug in new resource types, Ambari Views extends our implementation, along with APIs, providers, as well as UIs to support them. Basically, Views are deployed on the Ambari Servers that enables Ambari Admins to create View instances. Also, it helps to set access privileges for users as well as groups.

3. Configuring View Instances in Ambari

Generally, we specify some basic configuration information about the view and also we configure the view to communicate with a cluster when we create a View instance. Now, when completing the Cluster Configuration section, choose one of three options on the basis of resources managed by our Ambari Server, like Local Cluster, Remote Cluster, or Custom.

Let’s learn about Ambari Views Instances in detail:

a. Local Cluster

We can select Local Cluster if we are configuring a view instance in an Ambari Server that is also managing a cluster. Basically, Ambari automatically determines the cluster configuration properties required, when we select this option.

b. Remote Cluster

However, we must select either Remote Cluster or Custom, if our Ambari Server is not managing a cluster.

Let’s revise Ambari Groups and Users

Well, we should select Remote Cluster, when we plan to configure a view to working with a cluster which is remote from an Ambari Server and that cluster is being managed by Ambari.

So, basically, on registering a Remote Cluster, it enables the Remote Cluster option. Moreover, it automatically determines the cluster configuration properties which are required for the view instance, at the time we select the Remote Cluster option for a view instance. Although, ensure that the Remote Cluster includes all services which are must for the view we are configuring.

c. Custom

Further, we must select Custom and then manually configure the view to work with the cluster, if our cluster is remote from and not being managed by the Ambari Server running the view.

Do you know about Ambari Web UI

Now, to determine which options are available for view configuration, refer below points:

If our cluster is remote from the standalone Ambari Server running the view as well as managed by Ambari, then we must go for Remote Cluster We can go for Local Cluster if our cluster is managed by a local Ambari Server which is also running the view. If our cluster is not managed by Ambari and also it is remote from the standalone Ambari Server running the view, then we need to go for Custom. 3. Creating Ambari View Instances

In order to create a View instance, we need to follow these steps:

Steps for creating Ambari View Instances

At first, browse to a View on the Ambari Admin page and also expand it. Then click on create Instance. Now refer to the following information, to create Views, this will show which items are required and why:

Let’s discuss Ambari Troubleshooting

a. Item View version Required? yes Description It isan exact version to instantiate b. Item Instance name Required? yes Description Make sure, name unique to the selected View c. Item Display label Required? Yes Description It shows the readable display name of the View instance in the Ambari Web. d. Item Display Required? Yes Description It displays the readable description of the View instance in Ambari Web

Have a look at Ambari Architecture

e. Item Visible Required? No Description Basically, it shows whether the View is visible or not to the end-user in Ambari Web. Make sure to use this property to temporarily hide a view from users. f. Item Settings Required? Maybe Description On the basis of View, a group of settings which we can customize. We prompt toofferthe required information if we require a setting. g. Item Cluster configuration Required? May Be Description We can choose a local or remote cluster, or manually configure a custom View on the basis of View.

Be ensure about above items/information before creating views.

Further, the choice of Local Cluster will be available, if Ambari has a Cluster configured which will work with the View instance. Else, the choice of Remote Cluster will also be available, if we have registered one or more Remote Clusters. Moreover, we will have to enter the Custom configuration manually, if neither Local or Remote clusters are available.

Let’s take a tour of top Ambari Features

So, this was all in Ambari Views Tutorial. Hope you like our explanation.

4. Conclusion:Ambari Views

Hence, we have seen the concept of Apache Ambari Views along with Configuring and Creating View Instances. Still, if you have any query regarding Ambari Views, ask in the comment tab.


          自从Hadoop的出现,大数据的主要技术是什么?      Cache   Translate Page      

自从Hadoop的出现,大数据的主要技术是什么?

自从Hadoop的出现,引领大数据的浪潮越来越热。大数据存储的主要技术路线有几种:

1.Hadoop

2.Cassandra

3.MongoDB

Hadoop是Apache的开源项目,同时有很多商业公司对Hadoop进行版本发行和商业支持,参见:http://wiki.apache.org/hadoop/Distributions%20and%20Commercial%20Support

其中在最有名为人所知的三家:

1.Cloudera


自从Hadoop的出现,大数据的主要技术是什么?

2.Hortonwork


自从Hadoop的出现,大数据的主要技术是什么?

3.MapR


自从Hadoop的出现,大数据的主要技术是什么?

这三个厂商之中,MapR最为封闭;Hortonworks最为开放,产品线全开源,在线文档比较丰富。国内使用Cloudera CDH和Hortonworks的应该是最多的。

准实时计算框架/即席查询

1.CDH的框架有:Impala + Spark;

2.HDP的框架有:Tez + Spark;

3.MapR的框架有:Drill + Tez + Spark。

关于Spark:

2014年大数据最热门的技术路线就是算是Spark了,而且得力于Spark不遗余力的推广和快速成长。Cloudera是最早支持Spark,也是最激进的。下图即是Spark在Cloudera产品线中的定位:


自从Hadoop的出现,大数据的主要技术是什么?

实际上快速计算框架的发展才刚刚开始,社区中已经有如下几种:

1.Spark/Shark

2.Hortonworks Tez/Stinger

3.Cloudera Impala

4.Apache Drill

5.Apache Flink

6.Apache Nifi

7.Facebook Presto


          Top Ambari Interview Questions and Answers 2018      Cache   Translate Page      
1. Ambari Interview Preparation

In our last article, we discussed Ambari Interview Questions and Answers Part 1 . Today, we will see part 2 of top Ambari Interview Questions and Answers. This part contains technical and practical Interview Questions of Ambari, designed by Ambari specialist. If you are preparing for Ambari Interview then you must go through both parts of Ambari Interview Questions and answers. These all are researched questions which will definitely help you to move ahead.

Still, if you face any confusion in these frequently asked Ambari Interview Questions and Answers, we have provided the link of the particular topic. Given links will help you learn more about Apache Ambari .


Top Ambari Interview Questions and Answers 2018

Top Ambari Interview Questions and Answers 2018

2. Best Ambari Interview Questions and Answers

Following are the most asked Ambari Interview Questions and Answers, which will help both freshers and experienced. Let’s discuss these questions and answers for Apache Ambari

Que 1. What are the purposes of using Ambari shell?

Ans.Ambari Supports:

All the functionalities which are available through Ambari web-app. It supports the context-aware availability of commands. completion of a tab. Also, offers optional and required parameter support. Que 2. What is the required action you need to perform if you opt for scheduled maintenance on the cluster nodes?

Ans.Especially, for all the nodes in the cluster, Ambari offers Maintenance mode option. Hence before performing maintenance, we can enable the maintenance mode of Ambari to avoid alerts.

Que 3. What is the role of “ambari-qa” user?

Ans.‘ambari-qa’ user account performs a service check against cluster services that are created by Ambari on all nodes in the cluster.

Que 4. Explain future growth of Apache Ambari?

Ans.We have seen the massive usage of data analysis which brings huge clusters in place, due to increasing demand for big data technologies like Hadoop. Hence, more visibility companies are leaning towards the technologies like Apache Ambari, for better management of these clusters with enhanced operational efficiency.

In addition, HortonWorks is working on Ambari to make it more scalable. Thus, gaining knowledge of Apache Ambari is an added advantage with Hadoop also.

Que 5. State some Ambari components which we can use for automation as well as integration?

Ans.Especially, for automation and Integration, components of Ambari which are imported are separated into three pieces, such as:

Ambari Stacks Blueprints of Ambari Ambari API

However, to make sure that it deals with automation and integration problems carefully, Ambari is built from scratch.

Que 6. In which language is the Ambari Shell is developed?

Ans.In Java language , Ambarishell is developed. Moreover, it is based on Ambari REST client as well as the spring shell framework .

Que 7. State benefits of Hadoop users by using Apache Ambari.

Ans.We can definitely say, the individuals who use Hadoop in their day to day work life, the Apache Ambari is a great gift for them. So, benefits of Apache Ambari :

Simplified Installation process. Easy Configuration and management. Centralized security setup process. Full visibility in terms of Cluster health. Extendable and customizable.

Que 8. Name some independent extensions that contribute to the Ambari codebase?

Ans.They are:

1. Ambari SCOM Management Pack

2. Apache Slider View

Ambari Interview Questions and Answers for freshers Q. 1,2,4,6,7,8 Ambari Interview Questions and Answers for experienced Q. 3,5

Que 9. Can we use Ambari python Client to use of Ambari API’s?

Ans.Yes.

Que 10. What is the process of creating an Ambari client?

Ans.To create an Ambari client, the code is:

from ambari_client.ambari_api import AmbariClient headers_dict={'X-Requested-By':'mycompany'} #Ambari needs X-Requested-By header client = AmbariClient("localhost", 8080, "admin", "admin", version=1,http_header=headers_dict) print client.version print client.host_url print"n" Que 11. How can we see all the clusters that are available in Ambari?

Ans.In order to see all the clusters that are available in Ambari , the code is:

all_clusters = client.get_all_clusters() print all_clusters.to_json_dict() print all_clusters Que 12. How can we see all the hosts that are available in Ambari?

Ans.To see all the hosts that are available in Ambari, the code is:

all_hosts = client.get_all_hosts() print all_hosts print all_hosts.to_json_dict() print"n" Que 13. Name the three layers, Ambari supports?

Ans.Ambari supports several layers:

Core Hadoop Essential Hadoop Hadoop Support

Learn More about Hadoop

Que 14. What are the different methods to set up local repositories?

Ans.To deploy the local repositories, there are two ways:

Mirror the packages to the local repository. Else, download all the Repository Tarball and start building the Local repository

Que 15. How to set up local repository manually?

Ans.In order to set up a local repository manually, steps are:

At very first, set up a host with Apache httpd. Further download Tarball copy for every repository entire contents. However, one has to extract the contents, once it is downloaded. Ambari Interview Questions and Answers for freshers Q. 13,14,15 Ambari Interview Questions and Answers for experienced Q. 10,11,12 Que 16. How is recovery achieved in Ambari?

Ans.Recovery happens in Ambari in the following ways:

Based on actions

In Ambari after a restart master checks for pending actions and reschedules them since every action is persisted here. Also, the master rebuilds the state machines when there is a restart, as the cluster state is persisted in the database. While actions complete master actually crash before recording their completion, when there is a race condition. Well, the actions should be idempotent this is a special consideration taken. And, those actions which have not marked as complete or have failed in the DB, the master restarts them. We can see these persisted actions in Redo Logs.

Based on the desired state
          Украинские танцоры снялись в рекламе Lexus. Видео      Cache   Translate Page      


В сьемках были задействованы украинские танцоры из Apache Crew.



Танцоры украинского коллектива Apache Crew снялись в рекламе автомобильного бренда Lexus.



Об...



          Army Orders Emergency Fix on Bad Apache Rotor Blades      Cache   Translate Page      
Army aviation units are performing a retrofit to its AH-64E Apache helicopters to ensure that rotors don't break off.

          Introducing Flint: A time-series library for Apache Spark      Cache   Translate Page      

This is a joint guest community blog by Li Jin at Two Sigma and Kevin Rasmussen at Databricks; they share how to use Flint with Apache Spark. Introduction The volume of data that data scientists face these days increases relentlessly, and we now find that a traditional, single-machine solution is no longer adequate to the demands […]

The post Introducing Flint: A time-series library for Apache Spark appeared first on Databricks.


          A Guide to Apache Spark Use Cases, Streaming, and Research Talks at Spark + AI Summit Europe      Cache   Translate Page      

For much of Apache Spark’s history, its capacity to process data at scale and capability to unify disparate workloads has led Spark developers to tackle new use cases. Through innovation and extension of its ecosystem, developers combine data and AI to develop new applications. So it befits developers to come to this summit not just […]

The post A Guide to Apache Spark Use Cases, Streaming, and Research Talks at Spark + AI Summit Europe appeared first on Databricks.


          5 Reasons to Attend Spark + AI Summit Europe 2018      Cache   Translate Page      

The world’s largest event for the Apache Spark Community By Singh Garewal Posted in COMPANY BLOG August 27, 2018 Spark + AI Summit Europe will be held in London on October 2-4, 2018. Check out the full agenda and get your ticket before it sells out! Register today with the discount code 5Reasons and get […]

The post 5 Reasons to Attend Spark + AI Summit Europe 2018 appeared first on Databricks.


          Announcing Databricks Runtime 4.3      Cache   Translate Page      

I’m pleased to announce the release of Databricks Runtime 4.3, powered by Apache Spark.  We’ve packed this release with an assortment of new features, performance improvements, and quality improvements to the platform.   We recommend moving to Databricks Runtime 4.3 in order to take advantage of these improvements. In our obsession to continually improve our platform’s […]

The post Announcing Databricks Runtime 4.3 appeared first on Databricks.


          Java序列化的状态      Cache   Translate Page      

关键要点

  • Java序列化在很多库中引入了安全漏洞。
  • 对序列化进行模块化处于开放讨论状态。
  • 如果序列化能够成为模块,开发人员将能够将其从攻击表面上移除。
  • 移除其他模块可以消除它们所带来的风险。
  • 插桩提供了一种编织安全控制的方法,提供现代化的防御机制。

多年来,Java的序列化功能饱受 安全漏洞 和zero-day攻击,为此赢得了“ 持续奉献的礼物 ”和“ 第四个不可饶恕的诅咒 ”的绰号。

作为回应,OpenJDK贡献者团队讨论了一些用于限制序列化访问的方法,例如将其 提取到可以被移除的jigsaw模块中 ,让黑客无法攻击那些不存在的东西。

一些文章(例如“ 序列化必须死 ”)提出了这样的建议,将有助于防止 某些流行软件(如VCenter 6.5)的漏洞被利用

什么是序列化?

自从1997年发布 JDK 1.1 以来,序列化已经存在于Java平台中。

它用于在套接字之间共享对象表示,或者将对象及其状态保存起来以供将来使用(反序列化)。

在JDK 10及更低版本中,序列化作为java.base包和java.io.Serializable方法的一部分存在于所有的系统中。

GeeksForGeeks对 序列化的工作原理 进行了详细的描述。

有关更多如何使用序列化的代码示例,可以参看Baeldung对 Java序列化的介绍

序列化的挑战和局限

序列化的局限主要表现在以下两个方面:

  1. 出现了新的对象传输策略,例如JSON、XML、Apache Avro、Protocol Buffers等。
  2. 1997年的序列化策略无法预见现代互联网服务的构建和攻击方式。

进行序列化漏洞攻击的基本前提是找到对反序列化的数据执行特权操作的类,然后传给它们恶意的代码。为了理解完整的攻击过程,可以参看Matthias Kaiser在2015年发表的“ Exploiting Deserialization Vulnerabilities in Java ”一文,其中幻灯片第14页开始提供了相关示例。

其他大部分与序列号有关的安全研究 都是基于Chris Frohoff、Gabriel Lawrence和Alvaro Munoz的工作成果。

序列化在哪里?如何知道我的应用程序是否用到了序列化?

要移除序列化,需要从java.io包开始,这个包是java.base模块的一部分。最常见的使用场景是:

使用这些方法的开发人员应考虑使用其他存储和读回数据的替代方法。Eishay Smith发布了 几个不同序列化库的性能指标 。在评估性能时,需要在基准度量指标中包含安全方面的考虑。默认的Java序列化“更快”一些,但漏洞也会以同样的速度找上门来。

我们该如何降低序列化缺陷的影响?

项目Amber 包含了一个关于将序列化API隔离出来的讨论。我们的想法是将序列化从java.base移动到单独的模块,这样应用程序就可以完全移除它。在确定 JDK 11功能集 时并没有针对该提议得出任何结果,但可能会在未来的Java版本中继续进行讨论。

通过运行时保护来减少序列化暴露

一个可以监控风险并自动化可重复安全专业知识的系统对于很多企业来说都是很有用的。Java应用程序可以将JVMTI工具嵌入到安全监控系统中,通过插桩的方式将传感器植入到应用程序中。Contrast Security是这个领域的一个免费产品,它是JavaOne大会的 Duke's Choice大奖得主 。与其他软件项目(如MySQL或GraalVM)类似, Contrast Security的社区版 对开发人员是免费的。

将运行时插桩应用在Java安全性上的好处是它不需要修改代码,并且可以直接集成到JRE中。

它有点类似于面向切面编程,将非侵入式字节码嵌入到源端(远程数据进入应用程序的入口)、接收端(以不安全的方式使用数据)和转移(安全跟踪需要从一个对象移动到另一个对象)。

通过集成每个“接收端”(如ObjectInputStream),运行时保护机制可以添加额外的功能。在从JDK 9移植反序列化过滤器之前,这个功能对序列化和其他攻击的类型(如SQL注入)来说至关重要。

集成这个运行时保护机制只需要修改启动标志,将javaagent添加到启动选项中。例如,在Tomcat中,可以在bin/setenv.sh中添加这个标志:

CATALINA_OPTS=-javaagent:/Users/ecostlow/Downloads/Contrast/contrast.jar

启动后,Tomcat将会初始化运行时保护机制,并将其注入到应用程序中。关注点的分离让应用程序可以专注在业务逻辑上,而安全分析器可以在正确的位置处理安全性。

其他有用的安全技术

在进行维护时,可以不需要手动列出一长串东西,而是使用像 OWASP Dependency-Check 这样的系统,它可以识别出已知安全漏洞的依赖关系,并提示进行升级。也可以考虑通过像 DependABot 这样的系统进行库的自动更新。

虽然用意很好,但默认的 Oracle序列化过滤器 存在与SecurityManager和相关沙箱漏洞相同的设计缺陷。因为需要混淆角色权限并要求提前了解不可知的事物,限制了这个功能的大规模采用:系统管理员不知道代码的内容,所以无法列出类文件,而开发人员不了解环境,甚至DevOps团队通常也不知道系统其他部分(如应用程序服务器)的需求。

移除未使用模块的安全隐患

Java 9的模块化JDK能够 创建自定义运行时镜像 ,移除不必要的模块,可以使用名为jlink的工具将其移除。这种方法的好处是黑客无法攻击那些不存在的东西。

从提出模块化序列化到应用程序能够实际使用以及使用其他序列化的新功能需要一段时间,但正如一句谚语所说:“种树的最佳时间是二十年前,其次是现在”。

剥离Java的原生序列化功能还应该为大多数应用程序和微服务提供更好的互操作性。通过使用标准格式(如JSON或XML),开发人员可以更轻松地在使用不同语言开发的服务之间进行通信——与Java 7的二进制blob相比,python微服务通常具有更好的读取JSON文档的集成能力。不过,虽然JSON格式简化了对象共享,针对Java和.NET解析器的“ Friday the 13th JSON attacks ”证明了银弹是不存在的( 白皮书 )。

在进行剥离之前,序列化让然保留在java.base中。这些技术可以降低与其他模块相关的风险,在序列化被模块化之后,仍然可以使用这些技术。

为Apache Tomcat 8.5.31模块化JDK 10的示例

在这个示例中,我们将使用模块化的JRE来运行Apache Tomcat,并移除任何不需要的JDK模块。我们将得到一个自定义的JRE,它具有更小的攻击表面,仍然能够用于运行应用程序。

确定需要用到哪些模块

第一步是检查应用程序实际使用的模块。OpenJDK工具jdeps可以对JAR文件的字节码执行扫描,并列出这些模块。像大多数用户一样,对于那些不是自己编写的代码,我们根本就不知道它们需要哪些依赖项或模块。因此,我使用扫描器来检测并生成报告。

列出单个JAR文件所需模块的命令是:

jdeps -s JarFile.jar

它将列出模块信息:

tomcat-coyote.jar -> java.base
tomcat-coyote.jar -> java.management
tomcat-coyote.jar -> not found

最后,每个模块(右边的部分)都应该被加入到一个模块文件中,成为应用程序的基本模块。这个文件叫作module-info.java,文件名带有连字符,表示不遵循标准的Java约定,需要进行特殊处理。

下面的命令组合将所有模块列在一个可用的文件中,在Tomcat根目录运行这组命令:

find . -name *.jar ! -path "./webapps/*" ! -path "./temp/*" -exec jdeps -s {} \; | sed -En "s/.* -\> (.*)/  requires \1;/p" | sort | uniq | grep -v "not found" | xargs -0 printf "module com.infoq.jdk.TomcatModuleExample{\n%s}\n"

这组命令的输出将被写入lib/module-info.java文件,如下所示:

module com.infoq.jdk.TomcatModuleExample{
  requires java.base;
  requires java.compiler;
  requires java.desktop;
  requires java.instrument;
  requires java.logging;
  requires java.management;
  requires java.naming;
  requires java.security.jgss;
  requires java.sql;
  requires java.xml.ws.annotation;
  requires java.xml.ws;
  requires java.xml;
}

这个列表比整个Java模块列表要短得多。

下一步是将这个文件放入JAR中:

javac lib/module-info.java
jar -cf lib/Tomcat.jar lib/module-info.class

最后,为应用程序创建一个JRE:

jlink --module-path lib:$JAVA_HOME/jmods --add-modules ThanksInfoQ_Costlow --output dist

这个命令的输出是一个运行时,包含了运行应用程序所需的恰到好处的模块,没有任何性能开销,也没有了未使用模块中可能存在的安全风险。

与基础JDK 10相比,只用了98个核心模块中的19个。

java --list-modules

com.infoq.jdk.TomcatModuleExample
java.activation@10.0.1
java.base@10.0.1
java.compiler@10.0.1
java.datatransfer@10.0.1
java.desktop@10.0.1
java.instrument@10.0.1
java.logging@10.0.1
java.management@10.0.1
java.naming@10.0.1
java.prefs@10.0.1
java.security.jgss@10.0.1
java.security.sasl@10.0.1
java.sql@10.0.1
java.xml@10.0.1
java.xml.bind@10.0.1
java.xml.ws@10.0.1
java.xml.ws.annotation@10.0.1
jdk.httpserver@10.0.1
jdk.unsupported@10.0.1

运行这个命令后,就可以使用dist文件夹中的运行时来运行应用程序。

看看这个列表:部署插件(applet)消失了,JDBC(SQL)消失了,JavaFX也不见了,很多其他模块也消失了。从性能角度来看,这些模块不再产生任何影响。从安全角度来看,黑客无法攻击那些不存在的东西。保留应用程序所需的模块非常重要,因为如果缺少这些模块,应用程序也无法正常运行。

关于作者

Java序列化的状态 Erik Costlow 是甲骨文的Java 8和9产品经理,专注于安全性和性能。他的安全专业知识涉及威胁建模、代码分析和安全传感器增强。在进入技术领域之前,Erik是一位马戏团演员,可以在三轮垂直独轮车上玩火。

查看英文原文: The State of Java Serialization

 

来自:http://www.infoq.com/cn/articles/java-serialization-aug18

 


          Kafka Admin      Cache   Translate Page      
TX-Irving, We are seeking an Apache Kafka Administrator. The ideal candidate with have strong knowledge and experience with the installation and configuration of Kafka brokers, Kafka Managers. Experience with clustering and high availability configurations. Primary Responsibilities: Hands-on experience in standing up and administering on premise Kafka platform. Experience in Kafka brokers, zookeepers, Kafka
          moodle 3.1 - Se produjo un error mientras se comunicaba con el servidor      Cache   Translate Page      
por ivan Rivas.  

Saludos comunidad, tengo un problema desde hace unas cuantas semanas, pasa que en ocasiones la plataforma moodle deja de funcionar, mostrándose el siguiente error.


Este problema pasa mayormente cuando un grupo masivo de estudiantes (30 o mas) intenta subir sus exámenes a la plataforma virtual a la vez, la solución que le doy inmediatamente es reiniciar el apache y eliminar el contenido de la carpeta "cache", ubicada en moodledata.


Como dato adicional, en el registro de errores de xampp siempre aparece este mensaje cuando la plataforma cae:

  • VirtualAlloc() failed: [0x00000008] Espacio de almacenamiento insuficiente para procesar este comando.
Con respecto a este error, ya he aumentado los valores del max_input_time, memory_limit, post_max_size, upload_max_filesize y max_file_uploads.

Ademas configure el tamaño de la memoria virtual del servidor manualmente.

El servidor cuenta con 40GB(32 GB utilizables) de memoria ram, así que no se a que se deba este error, por favor agradecería muchísimo su ayuda, muchas gracias.


          JBOSS Admin      Cache   Translate Page      
MI-Farmington Hills, JBOSS Admin Farmington Hills, MI Responsibilities and Skills: • At least 6+ years of Experience in Jboss & Apache administration and support, which includes Jboss & Apache – installation, configuration and troubleshooting. • Experience in automating environment builds, administration and deployment operations using standard scripting utilities like Shell scripting. • Good knowledge of Infrastructu
          Bajaj Apache Brand New Motorcycle      Cache   Translate Page      
Location : Kurunegala
Price : Rs. 65,000
Contact : Bandara.
Ad Date : 2018-09-13
More Details


          「サマータイム」はセキュリティホールなのか?      Cache   Translate Page      
8月のセキュリティクラスタをにぎわせた大きな話題は3つありました。「東京オリンピック開催時のサマータイム」「DEFCONやBlack Hat USAなどのサイバーセキュリティ関連のカンファレンス」「Apache Struts 2で見つかった新たな脆弱(ぜいじゃく)性」です。
          3 open source log aggregation tools      Cache   Translate Page      
https://opensource.com/article/18/9/open-source-log-aggregation-tools

Log aggregation systems can help with troubleshooting and other tasks. Here are three top options.

Image by : 
opensource.com
x

Get the newsletter

Join the 85,000 open source advocates who receive our giveaway alerts and article roundups.
How is metrics aggregation different from log aggregation? Can’t logs include metrics? Can’t log aggregation systems do the same things as metrics aggregation systems?
These are questions I hear often. I’ve also seen vendors pitching their log aggregation system as the solution to all observability problems. Log aggregation is a valuable tool, but it isn’t normally a good tool for time-series data.
A couple of valuable features in a time-series metrics aggregation system are the regular interval and the storage system customized specifically for time-series data. The regular interval allows a user to derive real mathematical results consistently. If a log aggregation system is collecting metrics in a regular interval, it can potentially work the same way. However, the storage system isn’t optimized for the types of queries that are typical in a metrics aggregation system. These queries will take more resources and time to process using storage systems found in log aggregation tools.
So, we know a log aggregation system is likely not suitable for time-series data, but what is it good for? A log aggregation system is a great place for collecting event data. These are irregular activities that are significant. An example might be access logs for a web service. These are significant because we want to know what is accessing our systems and when. Another example would be an application error condition—because it is not a normal operating condition, it might be valuable during troubleshooting.
A handful of rules for logging:
  • DO include a timestamp
  • DO format in JSON
  • DON’T log insignificant events
  • DO log all application errors
  • MAYBE log warnings
  • DO turn on logging
  • DO write messages in a human-readable form
  • DON’T log informational data in production
  • DON’T log anything a human can’t read or react to

Cloud costs

When investigating log aggregation tools, the cloud might seem like an attractive option. However, it can come with significant costs. Logs represent a lot of data when aggregated across hundreds or thousands of hosts and applications. The ingestion, storage, and retrieval of that data are expensive in cloud-based systems.
As a point of reference from a real system, a collection of around 500 nodes with a few hundred apps results in 200GB of log data per day. There’s probably room for improvement in that system, but even reducing it by half will cost nearly $10,000 per month in many SaaS offerings. This often includes retention of only 30 days, which isn’t very long if you want to look at trending data year-over-year.
This isn’t to discourage the use of these systems, as they can be very valuable—especially for smaller organizations. The purpose is to point out that there could be significant costs, and it can be discouraging when they are realized. The rest of this article will focus on open source and commercial solutions that are self-hosted.

Tool options

ELK

ELK, short for Elasticsearch, Logstash, and Kibana, is the most popular open source log aggregation tool on the market. It’s used by Netflix, Facebook, Microsoft, LinkedIn, and Cisco. The three components are all developed and maintained by Elastic. Elasticsearch is essentially a NoSQL, Lucene search engine implementation. Logstash is a log pipeline system that can ingest data, transform it, and load it into a store like Elasticsearch. Kibana is a visualization layer on top of Elasticsearch.
A few years ago, Beats were introduced. Beats are data collectors. They simplify the process of shipping data to Logstash. Instead of needing to understand the proper syntax of each type of log, a user can install a Beat that will export NGINX logs or Envoy proxy logs properly so they can be used effectively within Elasticsearch.
When installing a production-level ELK stack, a few other pieces might be included, like Kafka, Redis, and NGINX. Also, it is common to replace Logstash with Fluentd, which we’ll discuss later. This system can be complex to operate, which in its early days led to a lot of problems and complaints. These have largely been fixed, but it’s still a complex system, so you might not want to try it if you’re a smaller operation.
That said, there are services available so you don’t have to worry about that. Logz.io will run it for you, but its list pricing is a little steep if you have a lot of data. Of course, you’re probably smaller and may not have a lot of data. If you can’t afford Logz.io, you could look at something like AWS Elasticsearch Service (ES). ES is a service Amazon Web Services (AWS) offers that makes it very easy to get Elasticsearch working quickly. It also has tooling to get all AWS logs into ES using Lambda and S3. This is a much cheaper option, but there is some management required and there are a few limitations.
Elastic, the parent company of the stack, offers a more robust product that uses the open core model, which provides additional options around analytics tools, and reporting. It can also be hosted on Google Cloud Platform or AWS. This might be the best option, as this combination of tools and hosting platforms offers a cheaper solution than most SaaS options and still provides a lot of value. This system could effectively replace or give you the capability of a security information and event management (SIEM) system.
The ELK stack also offers great visualization tools through Kibana, but it lacks an alerting function. Elastic provides alerting functionality within the paid X-Pack add-on, but there is nothing built in for the open source system. Yelp has created a solution to this problem, called ElastAlert, and there are probably others. This additional piece of software is fairly robust, but it increases the complexity of an already complex system.

Graylog

Graylog has recently risen in popularity, but it got its start when Lennart Koopmann created it back in 2010. A company was born with the same name two years later. Despite its increasing use, it still lags far behind the ELK stack. This also means it has fewer community-developed features, but it can use the same Beats that the ELK stack uses. Graylog has gained praise in the Go community with the introduction of the Graylog Collector Sidecar written in Go.
Graylog uses Elasticsearch, MongoDB, and the Graylog Server under the hood. This makes it as complex to run as the ELK stack and maybe a little more. However, Graylog comes with alerting built into the open source version, as well as several other notable features like streaming, message rewriting, and geolocation.
The streaming feature allows for data to be routed to specific Streams in real time while they are being processed. With this feature, a user can see all database errors in a single Stream and web server errors in a different Stream. Alerts can even be based on these Streams as new items are added or when a threshold is exceeded. Latency is probably one of the biggest issues with log aggregation systems, and Streams eliminate that issue in Graylog. As soon as the log comes in, it can be routed to other systems through a Stream without being processed fully.
The message rewriting feature uses the open source rules engine Drools. This allows all incoming messages to be evaluated against a user-defined rules file enabling a message to be dropped (called Blacklisting), a field to be added or removed, or the message to be modified.
The coolest feature might be Graylog’s geolocation capability, which supports plotting IP addresses on a map. This is a fairly common feature and is available in Kibana as well, but it adds a lot of value—especially if you want to use this as your SIEM system. The geolocation functionality is provided in the open source version of the system.
Graylog, the company, charges for support on the open source version if you want it. It also offers an open core model for its Enterprise version that offers archiving, audit logging, and additional support. There aren’t many other options for support or hosting, so you’ll likely be on your own if you don’t use Graylog (the company).

Fluentd

Fluentd was developed at Treasure Data, and the CNCF has adopted it as an Incubating project. It was written in C and Ruby and is recommended by AWS and Google Cloud. Fluentd has become a common replacement for Logstash in many installations. It acts as a local aggregator to collect all node logs and send them off to central storage systems. It is not a log aggregation system.
It uses a robust plugin system to provide quick and easy integrations with different data sources and data outputs. Since there are over 500 plugins available, most of your use cases should be covered. If they aren’t, this sounds like an opportunity to contribute back to the open source community.
Fluentd is a common choice in Kubernetes environments due to its low memory requirements (just tens of megabytes) and its high throughput. In an environment like Kubernetes, where each pod has a Fluentd sidecar, memory consumption will increase linearly with each new pod created. Using Fluentd will drastically reduce your system utilization. This is becoming a common problem with tools developed in Java that are intended to run one per node where the memory overhead hasn’t been a major issue.

          Apache NiFi fails to connect to Solr (LocalHost)      Cache   Translate Page      
Apache NiFi fails to connect to Solr (LocalHost), using hortonworks VM for college coursework. Not sure why it keeps failing, tried Googleing for hours! (Budget: $10 - $30 USD, Jobs: Apache, Linux, MySQL, Nginx, System Admin)
          ebook: Aggregating Data with Apache Spark™      Cache   Translate Page      
Learn why cluster computing makes Spark the ideal processing engine for complex aggregations, the different types of aggregations that you can do with Spark, and more.
          What is serverless?      Cache   Translate Page      
https://enterprisersproject.com/article/2018/9/what-serverless

Let’s examine serverless and Functions-as-a-Service (FaaS), how they fit together, and where they do and don’t make sense

By
CIO digital transformation
You likely have heard the term serverless (and wondered why someone thought it didn’t use servers). You may have heard of Functions-as-a-Service (FaaS) – perhaps in the context of Lambda from Amazon Web Services, introduced in 2014. You’ve probably encountered event-driven programming in some form. How do all these things fit together and, more importantly, when might you consider using them? Read on.
Servers are still involved; developers just don’t need to think about them in a traditional way.
Let’s start with FaaS. With FaaS, you write code to accomplish some specific task and upload the code for our function to a FaaS provider. The public cloud provider or on-premise platform then does everything else necessary to provision, run, scale, and manage the code. As a developer, you don’t need to do anything other than write your code and wire it up to other functions and services. FaaS provides programmers with an abstraction that allows them to focus on just writing code that takes action in response to events rather than interacting with the underlying server (whether bare metal, virtualized, or containerized).
[ Struggling to explain containers to non-techies? Read also: How to explain containers in plain English. ]
Now enter event-driven programming. Functions run in response to external events. It could be a call generated by a mouse click in a web app. But it could also be in response to some other action. For example, uploading a media file could trigger custom code that transcodes the file into a variety of formats.
Serverless then describes a set of architectural patterns that build on FaaS. Serverless combines custom FaaS code with common back-end services (such as databases and authentication) connected primarily through an event-driven execution model. From the perspective of a developer, these services are all managed by a third-party (whether an ops team or an external provider). Of course, servers are still involved; developers just don’t need to think about them in a traditional way.

Why serverless?

Serverless is an emerging technology area. There’s a lot of interest in the technology and approach although it’s early on and has yet to appear on many enterprise IT radar screens. To understand the interest, it’s useful to consider serverless from both operations and developer perspectives.
PaaS and FaaS are best thought of as being on a continuum rather than being entirely discrete.
For operations teams, one of the initial selling points of FaaS on public clouds was its pricing model. By paying only for an ephemeral (typically stateless) function while it was executing, you “didn’t pay for idle.” In general, while this aspect of serverless is still important to some, it’s less emphasized today. As a broader concept that brings in a wide range of services of which FaaS is just one part, the FaaS pricing model by itself is less relevant.
However, pricing model aside, serverless also allows operations teams to provide developers with a self-service platform and then get out of the way. This is a concept that has been present in platforms like OpenShift from the beginning. Serverless effectively extends the approach for certain types of applications.
The arguably more important aspect of serverless is increased developer productivity. This has two different aspects.
The first is that, as noted earlier, FaaS abstracts away many of the housekeeping details associated with server provisioning and management that are often just overhead for developers. In practice, this may not appear all that different to developers than a Platform-as-a-Service (PaaS). FaaS can even use containers under the covers just like a PaaS typically does. PaaS and FaaS are best thought of as being on a continuum rather than being entirely discrete.
The second is that, by offering common managed services out of the box, developers don’t need to constantly recreate them for new applications.

Where does serverless fit?

Serverless targets specific architectural patterns. As described earlier, it’s more or less wedded to a programming model in which functions and services react to each other in an event-driven and largely asynchronous way. Functions themselves are generally expected to be stateless, handle single tasks, and finish quickly. The fact that the interactions between services and functions are all happening over the network also means that the application as a whole should be fairly tolerant of latencies in these interactions.
You can think of FaaS as both simplifying and limiting.
While there are overlaps between the technologies used by FaaS, microservices, and even coarser-grained architectural patterns, you can think of FaaS as both simplifying and limiting. FaaS requires you to be more prescriptive about how you write applications.
Although serverless was originally most associated with public cloud providers, that comes with a caveat. Serverless, as implemented on public clouds, has a high degree of lock-in to a specific cloud vendor. This is true to some degree even with FaaS, but serverless explicitly encourages bringing in a variety of cloud provider services that are incompatible to varying degrees with other providers and on-premise solutions.
As a result, there’s considerable interest in and work going into open source implementations of FaaS and serverless, such as Knative and OpenWhisk, so that users can write applications that are portable across different platforms.
[ What's next for portable apps? Read also: Disrupt or be disrupted: 3 trends enabling next-level IT agility. ]

The speedy road ahead

Building more modern applications is a top priority for IT executives as part of their digital transformation journeys; it’s seen as the key ingredient to moving faster. To that end, organizations across a broad swath of industries are seeking ways to create new applications more quickly. Doing so involves both making traditional developers more productive and seeking ways to lower the barriers to software development for a larger pool of employees.
Serverless is an important emerging service implementation architecture that will be a good fit for certain types of applications. It will coexist with, rather than replace, architecture alternatives such as microservices used with containers and even just virtual machines. All of these architectural choices support a general trend toward simplifying the developer experience and making developers more productive.

          Running Apache Cassandra on Kubernetes      Cache   Translate Page      

Cassandra operator offers a powerful, open source option for running Cassandra on Kubernetes with simplicity and grace.


          Comment on The High Chaparral Season 1 – Coming 8/28/18 by Doug Wallen      Cache   Translate Page      
<span style="font-family: 'Georgia'">I believe that I may have actually interested my wife in this series (well, Billy Blue seems to have done the hard work ;), she likes his eyes). I may be going through this set a bit faster. We viewed an episode at lunch yesterday and today.<br /> <br /> <b><i>Best Man For the Job</i></b> (1.4) <br /> From IMDB: Buck unknowingly hires a group of Army deserters at the same time an Army patrol stops at High Chaparral documenting fights with the Apaches. When a troop takes high level Apache hostages, Cochise attacks returning the favor.<br /> <br /> Warren Stevens, Lane Bradford, Steven Raines<br /> <br /> Buck makes an impulsive hire and comes to regret the decision as it affects Blue. Blue, still seeking his Father's approval keeps trying to be noticed. The Army and the Indians keep having skirmishes that intrude on the peaceful life of the ranchers. Another strong episode that stays focused on the Cannon family and how the outside world brings trouble. Nice to see that Blue becomes the "Best Man For The Job" in his father's eyes.<br /> <br /> <b><i>A Quiet Day in Tucso</i></b>n (1.5)<br /> From IMDB: Blue needs new boots and the ranch needs supplies but cash is limited. Big John agrees to let Blue join Buck on a trip to town for supplies along with Manolito. However, the three lose their guns and the money on poker, drink, and women.<br /> <br /> Richard Devon, Vaughn Taylor, Ned Romero, John Milford<br /> <br /> The first comical episode that shows great cameraderie between Buck, Mano and Blue. Some of the comedy seems forced, but it does get better as the series progresses. Interesting to watch John try to figure out what has ticked off Victoria (and even more interesting that Buck provides the solution).</span>
          Object not found! for all BuddyPress pages      Cache   Translate Page      

All the user related links in BuddyPress returns the error. Please, help. I have been on this for 2 days. If I go to a URL that is not user related it displays fine. Note, Cletus is the user.

Example Link
http://localhost/winsomeapps/members/cletus/activity/

Error

Object not found!

The requested URL was not found on this server. The link on the referring page seems to be wrong or outdated. Please inform the author of that page about the error.

If you think this is a server error, please contact the webmaster.
Error 404
localhost
Apache/2.4.34 (Unix) OpenSSL/1.0.2o PHP/5.6.37 mod_perl/2.0.8-dev Perl/v5.16.3 

          Comment on Michael Olenick: Debunking Yet Another Misleading Gun Study by For_Christs_Sake      Cache   Translate Page      
ot, but It brings to mind the countless old westerns where the hero of the story foils a plot to sell rifles to the Apaches. Apparently the Apaches weren't allowed to have rifles, wonder why ..
          LXer: How to Setup Apache Subversion with HTTPS Letsencrypt on CentOS 7      Cache   Translate Page      
Published at LXer: Apache Subversion or SVN is open source versioning and revision control software. In this article, we show you how to set up Apache Subversion on the latest CentOS 7 server. We...
          ActiveMQ 5.15.6 发布,JMS 消息服务器      Cache   Translate Page      

ActiveMQ 5.15.6 已发布,更新内容如下:

Bug

  • [AMQ-6954] - Queue page on web console displays URL parameter without proper encoding

Improvement

  • [AMQ-7036] - FailoverTransport should not report errors trying to connect to Slave Broker

  • [AMQ-7038] - AMQP: Update Qpid JMS Proton-J and Netty latest versions

  • [AMQ-7047] - Add support for TLS hostname verification

注:新版本要求 Java 8 及以上版本。

下载地址:


          PHP 开源框架 MiniFramework 发布 1.4.0 版      Cache   Translate Page      

MiniFramework 是一款遵循 Apache2 开源协议发布的,支持 MVC 和 RESTful 的超轻量级 PHP 开发框架。MiniFramework 能够帮助开发者用最小的学习成本快速构建 Web 应用,在满足开发者最基础的分层开发、数据库和缓存访问等少量功能基础上,做到尽可能精简,以帮助您的应用基于框架高效运行。

MiniFramework于2018年9月13日发布1.4.0版本,变化有:

  • 新增Log类,用于以日志的形式记录代码运行报错和开发者自定义的调试信息。

  • 新增常量LOG_ON,用于控制日志功能的开启和关闭(生产环境建议关闭)。

  • 新增常量LOG_LEVEL,用于定义可被写入日志的错误等级。

  • 新增常量LOG_PATH,用于定义日志存储路径。

  • 新增Debug类的varType方法,用于判断变量类型。

  • 改进优化异常控制相关功能。

MiniFramework 1.4.0 版本下载地址
zip格式:https://github.com/jasonweicn/MiniFramework/archive/1.4.0.zip
tar.gz格式:https://github.com/jasonweicn/MiniFramework/archive/1.4.0.tar.gz

MiniFramework 快速入门文档
地址:http://www.miniframework.com/docv1/guide/

近期版本更新主要变化回顾:

1.3.0

  • 新增Debug类,用于程序代码的调试。

  • 新增Session类的commit方法,用于提交将当前$_SESSION变量存放的数据。

  • 新增Session类的status方法,用于获取当前会话状态。(PHP >= 5.4.0)

  • 新增Upload类的setSaveNameLen方法,用于设置上传文件保存时生成的随机文件名长度。

  • 新增Upload类的saveOne方法,专门用于上传保存单个文件。

  • 改进Upload类的save方法,支持多个文件同时上传保存的新特性。

1.2.0

  • 新增Upload类,用于上传文件。

  • 新增全局函数getFileExtName(),用于获取文件扩展名。

  • 新增全局函数getHash(),用于在分库或分表场景下获取一个指定长度INT型HASH值。

  • 新增常量PUBLIC_PATH,用于定义WEB站点跟目录。

  • 改进Model类,新增支持连贯操作方式查询数据的特性。

1.1.1

  • 修正Registry类命名冲突的bug,将其中的方法unset更名为del。

1.1.0

  • 新增Captcha类,用于生成和校验图片验证码

  • 新增Registry类的unset方法,用于删除已注册的变量

  • 新增全局函数browserDownload(),用于让浏览器下载文件

  • 在App目录中,新增名为Example的控制器,其中包含部分功能的示例代码

1.0.13

  • 改进Db_Mysql中的execTrans方法

  • 改进渲染特性

  • 新增全局函数isImage(),用于判断文件是否为图像格式

  • 新增全局函数getStringLen(),用于获取字符串长度(支持UTF8编码的汉字)

1.0.12

  • 新增Session类,用于读写会话数据

1.0.11

  • 改进转换伪静态地址分隔符的机制

  • 优化路由处理伪静态时的性能

  • 优化部分核心类的属性

  • 优化框架内存占用


          PHP 5.6.38, 7.0.32, 7.1.22 和 7.2.10 发布,多项内容修复      Cache   Translate Page      

PHP(外文名:PHP: Hypertext Preprocessor,中文名:“超文本预处理器”)是一种通用开源脚本语言。语法吸收了C语言、Java和Perl的特点,利于学习,使用广泛,主要适用于Web开发领域。PHP 独特的语法混合了C、Java、Perl以及PHP自创的语法。它可以比CGI或者Perl更快速地执行动态网页。用PHP做出的动态页面与其他的编程语言相比,PHP是将程序嵌入到HTML(标准通用标记语言下的一个应用)文档中去执行,执行效率比完全生成HTML标记的CGI要高许多;PHP还可以执行编译后代码,编译可以达到加密和优化代码运行,使代码运行更快。

PHP 5.6.38

- Apache2
  . Fixed bug #76582 (XSS due to the header Transfer-Encoding: chunked). (Stas)

PHP 7.0.32

- Apache2
  . Fixed bug #76582 (XSS due to the header Transfer-Encoding: chunked). (Stas)

PHP 7.1.22

- Core:
  . Fixed bug #76754 (parent private constant in extends class memory leak).
    (Laruence)
  . Fixed bug #72443 (Generate enabled extension). (petk)

- Apache2:
  . Fixed bug #76582 (Apache bucket brigade sometimes becomes invalid). (stas)

- Bz2:
  . Fixed arginfo for bzcompress. (Tyson Andre)

- gettext:
  . Fixed bug #76517 (incorrect restoring of LDFLAGS). (sji)

- iconv:
  . Fixed bug #68180 (iconv_mime_decode can return extra characters in a 
    header). (cmb)
  . Fixed bug #63839 (iconv_mime_decode_headers function is skipping headers).
    (cmb)
  . Fixed bug #60494 (iconv_mime_decode does ignore special characters). (cmb)
  . Fixed bug #55146 (iconv_mime_decode_headers() skips some headers). (cmb)

- intl:
  . Fixed bug #74484 (MessageFormatter::formatMessage memory corruption with
    11+ named placeholders). (Anatol)

- libxml:
  . Fixed bug #76777 ("public id" parameter of libxml_set_external_entity_loader
    callback undefined). (Ville Hukkam&auml;ki)

- mbstring:
  . Fixed bug #76704 (mb_detect_order return value varies based on argument
    type). (cmb)

- Opcache:
  . Fixed bug #76747 (Opcache treats path containing "test.pharma.tld" as a phar
    file). (Laruence)

- OpenSSL:
  . Fixed bug #76705 (unusable ssl => peer_fingerprint in 
    stream_context_create()). (Jakub Zelenka)

- phpdbg:
  . Fixed bug #76595 (phpdbg man page contains outdated information).
    (Kevin Abel)

- SPL:
  . Fixed bug #68825 (Exception in DirectoryIterator::getLinkTarget()). (cmb)
  . Fixed bug #68175 (RegexIterator pregFlags are NULL instead of 0). (Tim
    Siebels)

- Standard:
  . Fixed bug #76778 (array_reduce leaks memory if callback throws exception).
    (cmb)

- zlib:
  . Fixed bug #65988 (Zlib version check fails when an include/zlib/ style dir
    is passed to the --with-zlib configure option). (Jay Bonci)
  . Fixed bug #76709 (Minimal required zlib library is 1.2.0.4). (petk)

PHP 7.2.10

- Core:
  . Fixed bug #76754 (parent private constant in extends class memory leak).
    (Laruence)
  . Fixed bug #72443 (Generate enabled extension). (petk)
  . Fixed bug #75797 (Memory leak when using class_alias() in non-debug mode).
    (Massimiliano Braglia)

- Apache2:
  . Fixed bug #76582 (Apache bucket brigade sometimes becomes invalid). (stas)

- Bz2:
  . Fixed arginfo for bzcompress. (Tyson Andre)

- gettext:
  . Fixed bug #76517 (incorrect restoring of LDFLAGS). (sji)

- iconv:
  . Fixed bug #68180 (iconv_mime_decode can return extra characters in a 
    header). (cmb)
  . Fixed bug #63839 (iconv_mime_decode_headers function is skipping headers).
    (cmb)
  . Fixed bug #60494 (iconv_mime_decode does ignore special characters). (cmb)
  . Fixed bug #55146 (iconv_mime_decode_headers() skips some headers). (cmb)

- intl:
  . Fixed bug #74484 (MessageFormatter::formatMessage memory corruption with
    11+ named placeholders). (Anatol)

- libxml:
  . Fixed bug #76777 ("public id" parameter of libxml_set_external_entity_loader
    callback undefined). (Ville Hukkam&auml;ki)

- mbstring:
  . Fixed bug #76704 (mb_detect_order return value varies based on argument
    type). (cmb)

- Opcache:
  . Fixed bug #76747 (Opcache treats path containing "test.pharma.tld" as a phar
    file). (Laruence)

- OpenSSL:
  . Fixed bug #76705 (unusable ssl => peer_fingerprint in
    stream_context_create()). (Jakub Zelenka)

- phpdbg:
  . Fixed bug #76595 (phpdbg man page contains outdated information).
    (Kevin Abel)

- SPL:
  . Fixed bug #68825 (Exception in DirectoryIterator::getLinkTarget()). (cmb)
  . Fixed bug #68175 (RegexIterator pregFlags are NULL instead of 0). (Tim
    Siebels)

- Standard:
  . Fixed bug #76778 (array_reduce leaks memory if callback throws exception).
    (cmb)

- zlib:
  . Fixed bug #65988 (Zlib version check fails when an include/zlib/ style dir
    is passed to the --with-zlib configure option). (Jay Bonci)
  . Fixed bug #76709 (Minimal required zlib library is 1.2.0.4). (petk)

下载链接:


          Apache MXNet 1.3.0 发布,支持 Clojure 编程语言      Cache   Translate Page      

Apache MXNet (incubating) 是一款具备效率和灵活性的深度学习框架。它允许你混合符号编程和命令式编程,从而最大限度提高效率和生产力。在其核心是一个动态的依赖调度,它能够自动并行符号和命令的操作。一个图形优化层,使得符号执行速度快,内存使用高效。这个库便携,轻量,而且能够扩展到多个 GPU 和多台机器。

1.3.0 主要更新内容:

  • MXNet 现在支持 Clojure 编程语言

  • MKL-DNN 功能改进

  • Gluon 支持 Synchronized Batch Normalization

  • Gluon Vision Model Zoo 支持 MobileNetV2 预训练模型

  • Gluon RNN layers are now HybridBlocks

更多细节可查阅发行说明

下载地址:


          Gradle 4.10.1 发布,项目自动化构建工具      Cache   Translate Page      

Gradle 是一个基于 Apache Ant 和 Apache Maven 概念的项目自动化构建工具,支持依赖管理和多项目,类似 Maven,但比之简单轻便。它使用一种基于 Groovy 的特定领域语言来声明项目设置,而不是传统的 XML。

Gradle 4.10.1 是针对8月底发布的 4.10 的修复版本:

  • #6656: FileTreeElement.getPath() returns absolute system dependent filepath.

  • #6592: Up-to-date checks for missing files can be incorrect

  • #6612: Gradle fails when no incremental compile snapshot data available.

  • #6582: Gradle 4.10 incorrect ordering between dependencies of dependent tasks.

  • #6558tasks.withType(ScalaCompile::class.java).configureEach fails on multi-project builds.

  • #6653: Double deprecation message when using publishing plugin.

下载地址:

./gradlew wrapper --gradle-version=4.10.1

          Apache RocketMQ 4.3.1 发布,包含改进和 bug 修复      Cache   Translate Page      

Apache RocketMQ 4.3.1 发布了,更新内容如下

改进

  • [ISSUE-395] - 增强事务生成器 API 的兼容性,并将默认 topic 修改为 "TBW102",确保服务器可以向后兼容较旧的客户端

  • [ISSUE-396] - 增强事务性消息实现,为 EndTransactionProcessor 添加管理工具和分离的线性池

  • [ISSUE-430] - 删除与 mqfilter 服务器相关的脚本

Bug 修复

  • [ISSUE-392] - 修复在生产者关闭过程中发现的 Nullpointer 异常

  • [ISSUE-408] - 恢复在合并过程中丢失的代码

详情 http://rocketmq.apache.org/release_notes/release-notes-4.3.1/


          US Army Orders Emergency Fix on Apache Chopper's Rotor Blades      Cache   Translate Page      
Aviation units within the US Army have been conducting an emergency retrofit to AH-64E Apache helicopters since June in order to make sure that fasteners on rotor blades are not defective, it was revealed this week.
          Database Administrator - David Aplin Group - Saskatoon, SK      Cache   Translate Page      
Understanding of Apache, Tomcat, Java, JBOSS, Spring, Hibernate, Struts, J2EE, Javascript/JQuery, HTML and .NET C# considered an asset....
From David Aplin Group - Thu, 02 Aug 2018 06:29:09 GMT - View all Saskatoon, SK jobs
          TuxMachines: today's howtos      Cache   Translate Page      

read more


          Comment on Republicans of New Hampshire (one of the whitest states in the US) nominate a black former police chief for US Congress by Chris B      Cache   Translate Page      
Richard Aubrey- here in Michigan our Republican nominee for Senate is John James, a 37 year-old African American. He’s a West Point graduate, flew 400+ combat missions in Iraq as an Apache helicopter pilot, came home to earn an MBA and became CEO of his family business in Detroit. I’ve been to a couple of rallies and he’s a charismatic speaker with a beautiful young family. He has an uphill battle, being hugely outspent by the campaign of the current Democrat Senator Debbie Stabenow, but I think with enough support he could pull this off and be the future of the Republican Party. You can check him out at Johnjamesforsenate.com.
          Red Hat Security Advisory 2018-2701-01      Cache   Translate Page      
Red Hat Security Advisory 2018-2701-01 - Red Hat JBoss Web Server is a fully integrated and certified set of components for hosting Java web applications. It is comprised of the Apache HTTP Server, the Apache Tomcat Servlet container, Apache Tomcat Connector, JBoss HTTP Connector, Hibernate, and the Tomcat Native library. This release of Red Hat JBoss Web Server 3.1 Service Pack 5 serves as a replacement for Red Hat JBoss Web Server 3.1, and includes bug fixes, which are documented in the Release Notes document linked to in the References. Issues addressed include a denial of service vulnerability.
          Red Hat Security Advisory 2018-2700-01      Cache   Translate Page      
Red Hat Security Advisory 2018-2700-01 - Red Hat JBoss Web Server is a fully integrated and certified set of components for hosting Java web applications. It is comprised of the Apache HTTP Server, the Apache Tomcat Servlet container, Apache Tomcat Connector, JBoss HTTP Connector, Hibernate, and the Tomcat Native library. This release of Red Hat JBoss Web Server 3.1 Service Pack 5 serves as a replacement for Red Hat JBoss Web Server 3.1, and includes bug fixes, which are documented in the Release Notes document linked to in the References. Issues addressed include a denial of service vulnerability.
          Mach Sakai: WE MP5K Apache GBB      Cache   Translate Page      
Mach Sakai: WE MP5K Apache GBB

Mach Sakai checks it the MP5K Apache GBB from WE Airsoft. This uses an open bolt system with fiber reinforced body and functioning cocking handle. It has an ambidextrous fire selector that has 4 modes: safe, single, 3-round burst, and full auto. Find out how it performs in a quick steel challenge by the Japanese YouTube celebrity.


          Pickleball      Cache   Translate Page      
(South 8th and Apache streets). Loaner paddles are available if you don’t have one. Follow these topics:
          Pickleball      Cache   Translate Page      
(South 8th and Apache streets). Loaner paddles are available if you don’t have one. Follow these topics:
          (USA-OH-Brooklyn) Engineer IV      Cache   Translate Page      
Engineer IVinBrooklyn, OHatKey Bank- Corporate Date Posted:9/12/2018 ApplyNot ready to Apply? Share With: Job Snapshot + Employee Type: Full-Time + Location: 4910 Tiedeman Road Brooklyn, OH + Job Type: Information Technology + Experience: Not Specified + Date Posted: 9/12/2018 Job DescriptionAbout the JobThis is a virtual/remote position which can be performed from most states within the United states. This position is as a member of the Web Systems Services team responsible for the engineering and support of the WebSphere Application Server environment and various other web application and web server platforms at Key.Essential Job FunctionsDesign and implement enterprise class web and application server infrastructure solutions. Provide support for purpose of implementing applications on WebSphere Application Server (WAS) and IBM HTTP Server (IHS) - Interaction and communication with multiple teams - Providing continuous improvement ideas to reduce expenses and/or improve efficiency - Defining high-level application platform architectural guidelines - Creating and maintaining documentation across multiple environments on supported technologiesRequired QualificationsRequired SkillsTechnical - Bachelor s degree or equivalent experience - 3+ years of IT experience - Thorough Linux knowledge - Working understanding of web and application server technologies and concepts - Working knowledge of relevant network technologies (TCP/IP, DNS, SSH, HTTP, FTP, SSL, PKI, Firewall and DMZ concepts) - Working knowledge of network and operating system security concepts - Working knowledge of network administration and problem determination skills - Working knowledge of operating system administration (Unix/Linux) and problem determination skills - Working understanding of Java - Working knowledge of databases and database concepts (Oracle, Microsoft SQL Server, DB2) – Working knowledge of scripting and scripting languages (BASH, Perl, Jython, JRuby)Required SkillsProfessional - Working understanding of highly available/fault tolerant solutions - Ability to learn new skills quickly - Proven analytical/problem solving ability - Highly motivated and self-sufficient - Team player with strong communication and interpersonal skills - Ability to work independently and take ownership of initiatives, projects and assignments - Ability to multi-task and manage competing priorities Preferred Skills: - Familiarity with Web and Application Server Technologies (WebSphere Application Server, WebLogic, Apache, Tomcat, JBOSS) - Experience with client connectivity (MS SQL, MQ, DB2, ECI, ODBC, JDBC, etc.) - Experience performing load tests, capacity planning, and application configuration on the following Operating Systems: Linux (Redhat) - Scripting using PERL, Shell, Python, Jython, etc.ABOUT KEY:KeyCorp's roots trace back 190 years to Albany, New York. Headquartered in Cleveland, Ohio, Key is one of the nation's largest bank-based financial services companies, with assets of approximately $134.5 billion at March 31, 2017. Key provides deposit, lending, cash management, insurance, and investment services to individuals and businesses in 15 states under the name KeyBank National Association through a network of more than 1,200 branches and more than 1,500 ATMs. Key also provides a broad range of sophisticated corporate and investment banking products, such as merger and acquisition advice, public and private debt and equity, syndications, and derivatives to middle market companies in selected industries throughout the United States under the KeyBanc Capital Markets trade name. KeyBank is Member FDIC.ABOUT THE BUSINESS:Key Technology and Operations (KTO) is Key Bank s shared services organization for technology, operational, and servicing functions supporting business partners and clients across all lines of business. Within the overall organization, KTO provides efficient, reliable and secure technology; creates an effective variable cost technology delivery model that maximizes the return on IT spend; orchestrates the efficient use of corporate information and technology assets; and supports innovation that creates competitive distinction. KTO is effective and efficient in payment and deposit servicing, loan servicing, exception and dispute processing, investment and support services, sourcing and procurement, as well as enterprise-wide fraud prevention, investigations and operational support to human resources and the Bank s BSA/AML program.FLSA STATUS:ExemptKeyCorp is an Equal Opportunity and Affirmative Action Employer committed to engaging a diverse workforce and sustaining an inclusive culture. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status.JobID: 31630BR
          Benarkah BMW Sedang Siapkan G310RR (2019)?      Cache   Translate Page      

BMW Motorrad bakal memperkenalkan 9 model baharu pada penghujung tahun ini dan salah satunya ialah BMW G310RR. Dibina di India, model itu berkongsi platform sama dengan TVS Apache RR 310, manakala trenkuasa dikongsi bersama BMW G310R. Atas kertas G310RR akan berentap dengan Honda CBR300R, Kawasaki Ninja 400 dan KTM RC390. Menurut laporan AsphaltAndRubber, G310RR (2019) kali pertama dikesan […]

The post Benarkah BMW Sedang Siapkan G310RR (2019)? appeared first on PanduLaju.com.my.


          (USA-OK-Oklahoma City) System Administrator      Cache   Translate Page      
System Administrator EOE Statement We are an equal employment opportunity and E-Verify employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, national origin, disability status, gender identity, protected veteran status or any other characteristic protected by law. Description SUMMARY: The System Administrator is responsible for effective provisioning, installation/configuration, operation, and maintenance of systems hardware and software and related infrastructure. This individual participates in technical research and development to enable continuing innovation within the infrastructure. The Systems Administrator will ensure that system hardware, operating systems, software systems, and related procedures adhere to organizational values, enabling all customers, support teams and development teams. This individual is accountable for the following systems: Linux and Windows systems that support IFS infrastructure; Linux, Windows and Application systems that support the Portal; Responsibilities on these systems include system administrator engineering and provisioning, operations and support, maintenance and research and development to ensure continual innovation. ESSENTIAL FUNCTIONS/RESPONSIBILITIES: + Help tune performance and ensure high availability of infrastructure containing both Linux and Windows-based systems. + Maintain infrastructure monitoring and reporting tools. + Develop and maintain configuration management solutions. + Track, record, and confirm all systems and configuration changes. + Create tools to help teams make the most out of the available infrastructure. + Perform additional duties as assigned. Position Requirements + Bachelor’s degree in Computer Science or related field. + Minimum of 8 years of experience serving in the capacity of system administrator, in either Windows or Linux environments. + Minimum of 8 years of experience in applying OS patches and maintenance packs. + Experience installing, configuring, and maintaining services such as Bind, Apache, MySQL, Memcache, rsyslog, OSSEC, ntpd, Icinga, etc. + Experience with Windows servers in virtualized environments. + Experience with virtualization technologies, such as VirtualBox and VMWare. + Experience with virtual host and SSL setups + Experience with the National Airspace System (NAS) is preferred, but not required. SPECIFIC KNOWLEDGE, SKILLS, & ABILITIES: + Experience with Linux servers in virtualized environments. + Experience with Kubernetes, Docker, and Ansible preferred + Familiarity with the fundamentals of Linux scripting languages. + Proficient with network tools such as iptables, traceroute, Linux IPVS, HAProxy, etc. + Strong grasp of configuration management tools. + Familiarity with load balancing, firewalls, etc. + Ability to build and monitor services on production servers. + Knowledge of servers, switches, routing, network diagrams, etc. + Knowledge of CDN and DNS management. + Outstanding interpersonal and communication skills with the ability to effectively communicate across diverse audiences and influence cross functionally. + Ability to multi-task as well as be strategic, creative and innovative in a dynamic, team environment. + Ability to work independently as well as a contributing member of the team. + Proven ability to complete goals/projects on time delivering high quality results. Required work onsite in Oklahoma City. Full-Time/Part-Time Full-Time Position System Administrator Number of Openings 1 Location Oklahoma City, OK About the Organization Changeis, Inc. is an award-winning 8(a) certified, woman-owned small business that provides management consulting and engineering services to both public and private sectors. Changeis' work has resulted in successful execution of numerous programmatic initiatives, development of acquisition-sensitive deliverables, and establishment of a variety of long-term innovative strategic priorities for its customers. Changeis focuses on delivering unparalleled expertise in the areas of strategy and transformation management, investment analysis and acquisition management, governance, and innovation management. Inc. magazine has ranked the management consulting firm, Changeis Inc., among the top 1000 firms on its 35th annual Inc. 5000, the most prestigious ranking of the nation's fastest-growing private companies. Changeis offers a full benefit package that includes medical, dental, and vision, short and long term disability, retirement plan with immediate vesting and company match, and a generous annual leave plan. This position is currently accepting applications.
          (USA-NY-New York) Senior Software Engineer - Full Stack      Cache   Translate Page      
Want to work with driven and knowledgeable engineers using the latest technology in a friendly, open, and collaborative environment? Expand your knowledge at various levels of a modern, big-data driven, micro service stack, with plenty of room for career growth? At Unified, we empower autonomous teams of engineers to discover creative solutions for real-world problems in the marketing and technology sectors. We take great pride in building quality software that brings joy and priceless business insight to our end-users. We're looking for a talented Senior Software Engineer to bolster our ranks! Could that be you? What you'll do: • Gain valuable technology experience at a rapidly growing big data company • Take on leadership responsibilities, leading projects and promoting high quality standards • Work with a top-notch team of engineers and product managers using Agile development methodologies • Design, build and iterate on novel, cutting-edge software in a young, fast-moving industry • Share accumulated industry knowledge and mentor less experienced engineers • Participate regularly in code reviews • Test your creativity at Unified hack-a-thons Who you are: You're a senior engineer who is constantly learning and honing your skills. You love exploring complex systems to reveal possible architectural improvements. You're friendly and always willing to help others accomplish their goals. You have strong communication skills that enable you to describe technical issues in layman's terms. Must have: • 4+ years of professional development experience in relevant technologies • Willingness to mentor other engineers • Willingness to take ownership of projects • Backend development experience with Python, Golang, or Java • Frontend development experience with JavaScript, HTML, and CSS • React and supporting ecosystem tech, e.g. Redux, React Router, GraphQL • Experience supporting a complex enterprise SaaS software platform • Relational databases, e.g. Amazon RDS, PostgreSQL, MySQL • Microservice architecture design principles • Strong personal commitment to code quality • Integrating with third-party APIs • REST API endpoint development • Unit testing • A cooperative, understanding, open, and friendly demeanor • Excellent communication skills, with a willingness to ask questions • Demonstrated ability to troubleshoot difficult technical issues • A drive to make cool stuff and seek continual self-improvement • Able to multitask in a dynamic, early-stage environment • Able to work independently with minimal supervision Nice to have: • Working with agile methodologies • Experience in the social media or social marketing space • Git and Github, including Github Pull-Request workflows • Running shell commands (OS X or Linux terminal) • Ticketing systems, e.g. JIRA Above and beyond: • Amazon Web Services • CI/CD systems, e.g. Jenkins • Graph databases, e.g. Neo4J • Columnar data stores, e.g. Amazon Redshift, BigQuery • Social networks APIs, e.g. Facebook, Twitter, LinkedIn • Data pipeline and streaming tech, e.g. Apache Kafka, Apache Spark
          (USA-NY-New York) Software Engineer - Full-Stack      Cache   Translate Page      
Want to work with driven and knowledgeable engineers using the latest technology in a friendly, open, and collaborative environment? Expand your knowledge at various levels of a modern, big-data driven, micro service stack, with plenty of room for career growth? At Unified, we empower autonomous teams of engineers to discover creative solutions for real-world problems in the marketing and technology sectors. We take great pride in building quality software that brings joy and priceless business insight to our end-users. We're looking for a talented Software Engineer to bolster our ranks! Could that be you? What you'll do: • Gain valuable technology experience at a rapidly growing big data company • Work with a top-notch team of engineers and product managers using Agile development methodologies • Design, build and iterate on novel, cutting-edge software in a young, fast-moving industry • Participate regularly in code reviews • Test your creativity at Unified hack-a-thons Who you are: You're a software engineer who is eager to get more experience with enterprise-level software development, and constantly learning and honing your skills. You love to learn about large systems and make them better by fixing deficiencies and finding inefficient designs. You're friendly and always willing to help others accomplish their goals. You have strong communication skills that enable you to describe technical issues in layman's terms. Must have: • 1+ years of professional development experience in relevant technologies • Backend development experience with Python, Golang, or Java • Relational databases, e.g. Amazon RDS, PostgreSQL, MySQL • Strong personal commitment to code quality • Integrating with third-party APIs • REST API endpoint development • Unit testing • Working with agile methodologies • A cooperative, understanding, open, and friendly demeanor • Excellent communication skills, with a willingness to ask questions • Demonstrated ability to troubleshoot difficult technical issues • A drive to make cool stuff and seek continual self-improvement • Able to multitask in a dynamic, early-stage environment • Able to work independently with minimal supervision Nice to have: • Frontend development experience with JavaScript, HTML, and CSS • React and supporting ecosystem tech, e.g. Redux, React Router, GraphQL • Experience supporting a complex enterprise SaaS software platform • Experience in the social media or social marketing space • Git and Github, including Github Pull-Request workflows • Running shell commands (OS X or Linux terminal) • Microservice architecture design principles • Ticketing systems, e.g. JIRA Above and beyond: • Amazon Web Services • CI/CD systems, e.g. Jenkins • Graph databases, e.g. Neo4J • Columnar data stores, e.g. Amazon Redshift, BigQuery • Social networks APIs, e.g. Facebook, Twitter, LinkedIn • Data pipeline and streaming tech, e.g. Apache Kafka, Apache Spark
          Brig. Gen. Thomas Todd: Army Gives Nod to Boeing to Resume Apache Helicopter Deliveries      Cache   Translate Page      
Brig. Gen. Thomas Todd, program executive officer for aviation, has said the U.S. Army started to accept deliveries of Boeing-built AH-64E Apache helicopters on Aug. 31 after the company kicked off an effort in June to address safety issues with the aircraft’s strap pack nut, Defense News reported Tuesday. The service temporarily stopped deliveries of Apaches […]
          Chicanonautica: On the Lookout for Aztlán      Cache   Translate Page      
By Ernest Hogan

Every summer, my wife (author of Medusa Uploaded) and I, and her mother take a road trip to New Mexico, and other nearby states. The Southwest. AKA, the Wild West. I like to think of it as Aztlán. I keep on the lookout for signs of the world that existed before the coming of tract housing, strip malls, and other air-conditioned delusions. It can often be found in the wide-open post-apocalyptic landscapes, with the ancient ruins, postmodern ghost towns, and roadside datura.

This year we reversed our usual routine, and shot up north through the Apache, Hopi and Navajo reservations, on our way to Utah. Indian country is always a relief after being in the heat island of Phoenix for weeks of killer summer.

I was glad to see that some of the murals on crumbling structures had been touched up--there were even some new ones. Last year they were all looking neglected, and I was afraid that another interesting cultural development was disintegrating under the current political/economic environment. I should have known, outlaw art doesn’t die easy.

Our first stop was Kanab, “Utah’s Little Hollywood,” where movie people stayed while creating their new Americano mythology. Murals romanticize the pioneers, though back in the nineteenth century, everybody called them “emigrants.” The streets bristle with tributes to celluloid cowboys and Indians, some long forgotten.

Emily’s mom asked about a “traditional Mexican restaurant,” but there weren’t any to be found. There weren’t any Mexicans to be seen. I was the closest anybody came, and the white tourists speaking Germanic or Scandinavian languages looked at me with my brown skin and bandido moustache as if I were part of the kitschy decor.

We had better luck in Parowan (a Native word meaning “evil water” . . . hmm), a town on the Old Spanish Trail, from the pre-Anglo days. Right on the Trail was La Villa, touted as a Lozano restaurant--lozano meaning sexy, romantic, hot, gorgeous, but also elegant, haughty, or arrogant. To our delight, it was an old-fashioned place with good Mexican food made by Mexicans. Emily’s mom approved.

Besides Asian tourists, the next non-Anglos we saw were Native Americans in Blanding, also home of the fabulous Dinosaur Museum. They were in and around the Edge of the Cedars Museum. One guy was crossing a street. Some worked there. One woman came in as a customer. Indians were as scarce as Mexicans in Utah. “The Utes and Paiutes are practically invisible,” Emily remarked.

In Bluff, near the Four Corners, but still in Utah, we enjoyed Navajo tacos in the Twin Rocks Cafe. It’s was founded in 1880, built on the ruins of a pueblo that archeologists estimate dates back to 650 A.D. In the Big Rez, Navajos seems to be taking over the town, which is a good thing.

Finally, we cut across Monument Valley (Hollywood’s Texas backdrop) through Arizona into New Mexico. After passing through the Navajo and Zuñi reservations, it was hard to keep up with what tribal lands we were in. And you could see the Indians. The Hispanos were everywhere, along with names in Spanish that go back to before America invaded.

"It feels good to be surrounded by people who speak mySpanish,” said Emily’s mom, who in Utah, tried to talk to the Indians in Spanish, and got only puzzled looks.

Ernest Hogan, contributor to the American Book Award winning Altermundos, took a lot of pictures and notes while vacationing in Aztlán. He will be blogging about it at Mondo Ernesto soon.


          Episode 85: #85: In Control      Cache   Translate Page      

This week Dave and Gunnar talk about: Controlling the mind, the body, and your drones.  Also: RHEL 6.7 beta, JBoss EAP 6.4, and controlling the enterprise with DevOpsDays Austin.

CEQOsL9XIAAezy_

Cutting Room Floor

#source%3Dgooglier%2Ecom#https%3A%2F%2Fgooglier%2Ecom%2Fpage%2F%2F10000
          Episode 57: #57: 10¢ Beer Night      Cache   Translate Page      

This week, Dave and Gunnar talk about: Dave’s Russian agents, VMware’s nightmarish Docker future, everyone learns what “navigator” is in Greek.

beer-night-cup

Cutting Room Floor

We Give Thanks

#source%3Dgooglier%2Ecom#https%3A%2F%2Fgooglier%2Ecom%2Fpage%2F%2F10000
          Episode 45: #45: DevNation      Cache   Translate Page      

This week Ray Ploski and Langdon White prep Dave and Gunnar for DevNation!

RSS Icon#source%3Dgooglier%2Ecom#https%3A%2F%2Fgooglier%2Ecom%2Fpage%2F%2F10000 Subscribe via RSS or iTunes.

Screenshot 2014-03-08 18.07.11

We Give Thanks

#source%3Dgooglier%2Ecom#https%3A%2F%2Fgooglier%2Ecom%2Fpage%2F%2F10000
          Episode 33: #33: Beard Phone      Cache   Translate Page      

This week, Dave and Gunnar talk about badBIOS and unreliable narrators, 85% of Android is crap, warrant canaries, and special guest star Adam Clater talking about OpenShift and ownCloud

RSS Icon#source%3Dgooglier%2Ecom#https%3A%2F%2Fgooglier%2Ecom%2Fpage%2F%2F10000 Subscribe via RSS or iTunes.

beards

Cutting Room Floor

We Give Thanks

  • Adam Clater for guest starring!
  • Gunnar’s mom for teaching us about warrant canaries
#source%3Dgooglier%2Ecom#https%3A%2F%2Fgooglier%2Ecom%2Fpage%2F%2F10000
          Episode 16: #16: Red Hat Summit, Day 3: Major Hayden lives!      Cache   Translate Page      

Today, Dave and Gunnar talk about the third day of the Red Hat Summit, and interview Major Hayden at last! This one is super-nerdy, that’s fair warning.

RSS Icon#source%3Dgooglier%2Ecom#https%3A%2F%2Fgooglier%2Ecom%2Fpage%2F%2F10000 Subscribe via RSS or iTunes.

Just look at him: a man comfortable with the safety of his systems.Just look at him: a man comfortable with the safety of his systems. #source%3Dgooglier%2Ecom#https%3A%2F%2Fgooglier%2Ecom%2Fpage%2F%2F10000
          TVS Apache 310 launched in Nepal – Price NPR 12.5 lakhs (Rs 7.8 lakhs)      Cache   Translate Page      

Price in Nepal currency is about NPR 12.5 lakhs. As per today's conversion rate, this means it is about Indian Rupees Rs 7.8 lakhs.

The post TVS Apache 310 launched in Nepal – Price NPR 12.5 lakhs (Rs 7.8 lakhs) appeared first on RushLane.


          Colour Photoetch for 1/48 AH-64A Apache Helicopter for Hasegawa kit      Cache   Translate Page      
Colour Photoetch for 1/48 AH-64A Apache Helicopter for Hasegawa kit

Buy Now
          Somos flores en el jardín del Gran Espiritu      Cache   Translate Page      
“No tiene nada que ver con el color de tu piel, sino con la manera en que vives tu vida. Si vives en esta tierra, y tienes ancestros durmiendo en esta tierra, creo que eso te convierte en un nativo de esta tierra. No fui criada para analizar a las personas racialmente. Lo que me enseñaron es que somos flores en el jardín del Gran Espíritu. Compartimos una raíz común, y la raíz es la Madre Tierra. El jardín es hermoso porque tiene diferentes colores, y esos diferentes colores representan tradiciones y culturas diferentes.

”Oh Shinnah, mujer medicina Apache




          Update: Timbers Bar & Grill (Food & Drink)      Cache   Translate Page      

Timbers Bar & Grill 2.5


Device: iOS iPhone
Category: Food & Drink
Price: Free, Version: 2.3 -> 2.5 (iTunes)

Description:

Download the App for delicious deals, special offers, loyalty rewards and online ordering from Timbers Bar & Grill in Nevada. With locations in Lake Mead, Fort Apache, Horizon, Rancho, West Cheyenne, Azure, Sunset and Durango, great food, cocktails and gaming fun offers are right at your fingertips. Scroll through to see what’s happening at your preferred location or tap the App for great carryout or delivery. Catering is also available. Paperless coupons, great food, Happy Hour specials and more information is right on your smart phone with benefits like:

•Easy online ordering (dine in, carryout, delivery, full bar, Happy Hour and more)
•Exclusive specials and savings
•Updates and notifications
•Digital punch card rewards
•And more!

The App is FREE, easy to use and with their 24/7 hours, a great meal is just a tap on the App away from Timbers Bar & Grill, serving the Las Vegas and Henderson communities in Nevada.

What's New

This release includes the following updates and enhancements:

- Performance Enhancements
- Bug Fixes

Timbers Bar & Grill


          Technical Writer II - Alexa - Amazon.com - Seattle, WA      Cache   Translate Page      
Frameworks and Infrastructure with tools like Apache MXNet and TensorFlow, API-driven Services like Amazon Lex, Amazon Polly and Amazon Rekognition to quickly...
From Amazon.com - Mon, 13 Aug 2018 19:22:01 GMT - View all Seattle, WA jobs
          通过快递api获取物流信息示例-快递100      Cache   Translate Page      
import java.io.IOException; import java.util.ArrayList; import java.util.HashMap; import java.util.List; import java.util.Map; import org.apache.commons.lang3.StringUtils; import org.apache.http.HttpResponse; import org.apache.http.client.ClientProtocolException; import org.apache.h...
          Flink 的新方向在哪里?这场顶级盛会给出了答案      Cache   Translate Page      
九月的柏林,比杭州多了一丝清冽,与之相对应的,是如火如荼的2018 Flink Forward Berlin(以下简称FFB)会场。在这个初秋,Apache Flink 核心贡献者、行业先锋、实践专家在这里齐聚一堂,围绕Flink发展现状,生态与未来,共话计算之浪潮。值得一提的是,阿里巴巴作为ApacheFlink主要贡献方,受邀参与此次盛会,并发表演...
          Windows安装使用Hadoop3.0.0      Cache   Translate Page      
亲测没毛病。 java 中文API :http://hadoop.apache.org/docs/r1.0.4/cn/
          Spring Cloud之Eureka注册中心及集群      Cache   Translate Page      
创建项目 创建的网站http://start.spring.io/ 创建两个springboot工程,一个作为注册中心,一个作为测试客户端,注意要导入(eureka-server),创建的界面如下 也可以用IDEA 来创建 依赖的配置 <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3....
          インドの二輪車大手TVS、アパッチの累計販売が300万台突破!      Cache   Translate Page      


アジア経済ニュースNNA ASIAは2018年09月12日に、インドの二輪車大手TVSモーター(TVS Motor)は2018年09月10日に、2005年に発売したモーターバイク「アパッチ(Apache)」シリーズの販売台数が累計300万台を突破したと発表した。

https://time-az.com/main/detail/65794

「アパッチ」は、TVSがレーシング部門子会社と連携して開発したモデルで、発売以来、走りを追求する層から厚い支持を得ているという。

TVSモーターの社長兼最高経営責任者(CEO)ラダクリシュナン(K. N. Radhakrishnan)は300万台達成に際し、「アパッチはレーシングモデルの血統を受け継ぐ性能やデザインで、国内外の消費者から広く愛されるモデルに成長した。」などとコメントした。

「アパッチ」は、レース仕様のRTR(Race, Throttle, Response/レース・スロットル・レスポンス)タイプとスポーツ仕様のRR(Race-Replica/レース・レプリカ)タイプの2仕様から、排気量160〜310ccのモデルを販売している。


          Running Apache Cassandra on Kubernetes      Cache   Translate Page      

Cassandra operator offers a powerful, open source option for running Cassandra on Kubernetes with simplicity and grace.


          Senior Software Development Engineer – Big Data, AWS Elastic MapReduce (EMR) - Amazon.com - Seattle, WA      Cache   Translate Page      
Amazon EMR is a web service which enables customers to run massive clusters with distributed big data frameworks like Apache Hadoop, Hive, Tez, Flink, Spark,...
From Amazon.com - Wed, 05 Sep 2018 01:21:08 GMT - View all Seattle, WA jobs
          Technical Writer II - Alexa - Amazon.com - Seattle, WA      Cache   Translate Page      
Frameworks and Infrastructure with tools like Apache MXNet and TensorFlow, API-driven Services like Amazon Lex, Amazon Polly and Amazon Rekognition to quickly...
From Amazon.com - Mon, 13 Aug 2018 19:22:01 GMT - View all Seattle, WA jobs
          Kafka Admin      Cache   Translate Page      
TX-Irving, We are seeking an Apache Kafka Administrator. The ideal candidate with have strong knowledge and experience with the installation and configuration of Kafka brokers, Kafka Managers. Experience with clustering and high availability configurations. Primary Responsibilities: Hands-on experience in standing up and administering on premise Kafka platform. Experience in Kafka brokers, zookeepers, Kafka
          TVS Launches Apache RR310 & NTORQ 125 in Nepal      Cache   Translate Page      
Today TVS launched their premium motorcycle & scooter, the TVS Apache RR 310 & the NTORQ 125 at the National Automobile Dealers Association (NADA) Auto Show 2018 held in Kathmandu, Nepal. Both these vehicles are TVS’s latest offerings from their stable, while the Apache RR310 boasts performance and dynamics, the NTORQ 125 has been built on the […]
          How to collaborate on docs stored on your WebDAV server with ONLYOFFICE      Cache   Translate Page      
This guide explains how to connect your WebDAV server to ONLYOFFICE and edit and collaborate on documents online. WebDAV stands for Web-based Distributed Authoring and Versioning and is a set of extensions to the HTTP protocol that allows users to directly edit files on the Apache server so that they do not need to be downloaded/uploaded via FTP.
          JSP throwing java.lang.NoClassDefFoundError: javax/xml/rpc/Service PROBLEM      Cache   Translate Page      
Hi,
When running a java class for OracleOndemand Attachments in JAVA APPLICATION i.e. in main class its RUNNING

public static void main(String args[])

{
String id=com.AttachmentQuery.getAccountId("Acc121", "Prospect"); //NO ERROR
String att_id=AttachmentQuery.getAttachmentId("Acc121", "Prospect", "attchment_up");//NO ERROR
System.out.println("account id and attachment id:->"+id+"&&"+att_id);
}

BUT WHEN THESE ABOVE LINES OF CODE WRITTEN IN JSP THEN ITS THROWING ERROR



<%@ page language="java" contentType="text/html; charset=ISO-8859-1"
pageEncoding="ISO-8859-1"%>

<%@ page import="com.AttachmentQuery" %>
<%@ page import="javax.xml.rpc.*" %>
<%@ page import="com.Authentication" %>
<%@ page import="com.AttachmentQuery" %>

<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>Attachment info</title>
<%
try
{
String id=com.AttachmentQuery.getAccountId("3M Company", "Customer Bill to");//ERROR att_id=AttachmentQuery.getAttachmentId("3M Company", "Customer Bill to", "SDS Communication Nyflex 222B - 3M - 2011");//Error
System.out.println("account id and attachment id:->"+id+"&&"+att_id);



}
catch(Exception e)
{
System.out.println("In exception:->"+e);     
}
%>
</head>
<body>

</body>
</html>


EXCEPTION IS----->>>>>
Oct 26, 2012 1:45:23 PM org.apache.catalina.core.StandardWrapperValve invoke
SEVERE: Servlet.service() for servlet jsp threw exception


java.lang.NoClassDefFoundError: javax/xml/rpc/Service

     at java.lang.ClassLoader.defineClass1(Native Method)
     at java.lang.ClassLoader.defineClass(Unknown Source)
     at java.security.SecureClassLoader.defineClass(Unknown Source)
     at org.apache.catalina.loader.WebappClassLoader.findClassInternal(WebappClassLoader.java:2818)
     at org.apache.catalina.loader.WebappClassLoader.findClass(WebappClassLoader.java:1159)
     at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1647)
     at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1526)
     at java.lang.ClassLoader.loadClassInternal(Unknown Source)
     at org.apache.jsp.attachment_005finfo_jsp._jspService(attachment_005finfo_jsp.java:82)
     at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
     at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
     at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:386)
     at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:313)
     at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:260)
     at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
     at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
     at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
     at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
     at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
     at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
     at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
     at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
     at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
     at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
     at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
     at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
     at java.lang.Thread.run(Unknown Source)

Edited by: Harshad Dani on Oct 26, 2012 2:31 AM

          Una BMW G 310 RR de carbono se ha mostrado en Japón y podría ser el teaser de la deportiva pequeña de BMW      Cache   Translate Page      

Una BMW G 310 RR de carbono se ha mostrado en Japón y podría ser el teaser de la deportiva pequeña de BMW#source%3Dgooglier%2Ecom#https%3A%2F%2Fgooglier%2Ecom%2Fpage%2F%2F10000

Durante años se ha estado rumoreando con una versión deportiva de la gama de acceso de BMW. Partiendo como base de la pequeña roadster BMW G 310 R la firma india con la que los alemanes tienen una joint venture para producir sus motos presentaron la TVS Apache RR 310 y nos hicieron soñar. Ahora estamos un paso más cerca.

Recientemente en el BMW Motorrad Days de Japón la marca lució una preparación en la que de nuevo se ha tomado la base de la naked germana y que podría avanzar un hipotético modelo de producción de la BMW G 310 RR.

Un prototipo de BMW G 310 RR que podría adelantar el modelo de producción

Bmw G 310 Rr 2019 2#source%3Dgooglier%2Ecom#https%3A%2F%2Fgooglier%2Ecom%2Fpage%2F%2F10000

A la mecánica de la G 310 R se la ha adaptado un carenado inspirado en el de las grandes S 1000 RR, por eso le queda como de una talla más. Por lo demás equipa el mismo chasis tubular de acero, las mismas suspensiones (horquilla invertida y monoamortiguador) y frenos con un solo disco y pinza de anclaje radial delante.

Lo que sí varían es la posición del escape bajo el colín y la ergonomía. Sobre el subchasis se monta un conjunto de asiento-colín más alto, el depósito es más elevado y a las botellas de la horquilla se anclan unos nuevos semimanillares por debajo de la tija superior para acoplarse tras el carenado. Un carenado, por cierto, realizado íntegramente en fibra de carbono que difícilmente llegaría a la producción.

Puede que en un universo muy remoto BMW se haya planteado traer a rebufo de la renovada BMW S 1000 RR que se presentará en unas semanas una deportiva pequeña con la que competir en el mundial de SSP 300. Siendo honestos debemos apuntar que una mecánica monocilíndrica como la que utiliza BMW (313 cc, 34 CV y 28 Nm) poco tiene que hacer contra las monturas de KTM, Kawasaki, Honda y Yamaha.

Bmw G 310 Rr 2019#source%3Dgooglier%2Ecom#https%3A%2F%2Fgooglier%2Ecom%2Fpage%2F%2F10000

Si BMW Motorrad nos sorprende y lanza a la producción la BMW G 310 RR, será el tercer modelo nacido de la alianza de la firma alemana con TVS Motor Company, uno de los fabricantes más poderosos del mundo con 2,5 millones de motos vendidas al año.

El INTERMOT 2018 se celebrará del 3 al 10 de octubre y puede ser que allí nos encontremos con la versión definitiva de esta esperada moto, aunque durante los últimos meses siempre que hemos preguntado al respecto a los alemanes en las presentaciones de BMW nos lo han negado de forma tajante.

Fotos | Oliepeil

También te recomendamos

Un par de chicos listos: smartphones y smart TVs, la revolución del ocio tecnológico mano a mano

A partir de ahora las motos de BMW vendrán de serie con 3 años de garantía sin límite de kilómetros

Mañana se presenta la TVS Apache RR 310, y eso significa que la BMW G 310 RR está más cerca

-
La noticia Una BMW G 310 RR de carbono se ha mostrado en Japón y podría ser el teaser de la deportiva pequeña de BMW fue publicada originalmente en Motorpasion Moto por Jesus Martin .


          (USA-PA-Philadelphia) HIA Data & Analytics - Data Engineer, Senior Associate      Cache   Translate Page      
A career within Data and Analytics Technology services, will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. **Responsibilities** As a Senior Associate, you’ll work as part of a team of problem solvers with extensive consulting and industry experience, helping our clients solve their complex business issues from strategy to execution. Specific responsibilities include but are not limited to: + Proactively assist in the management of several clients, while reporting to Managers and above + Train and lead staff + Establish effective working relationships directly with clients + Contribute to the development of your own and team’s technical acumen + Keep up to date with local and national business and economic issues + Be actively involved in business development activities to help identify and research opportunities on new/existing clients + Continue to develop internal relationships and your PwC brand **Preferred skills** + Strong Java Programming Skills + Strong SQL Query skills + Experience using Java with Spark + Hands-on experience with Hadoop administration and troubleshooting, including cluster configuration and scaling + Hands-on experience with Hive + AWS Experience + Hands-on AWS Console and CLI + Experience with S3, Kinesis Stream + AWS EMS + AWS Lambda + Python **Helpful to have** + AWS ElasticSearch + AWS DynamoDB **Recommended Certs/Training** + AWS Certified Big Data – Specialty **Minimum years experience required** + 3 to 4 years Client Facing Consulting of technical implementation and delivery management experience. **Additional application instructions** Healthcare industry experience - PLS, Payer & Provider experience. Hand-on experience with at least 2-3 leading enterprise data tools/products; 1. Data Integration: Informatica Power Center, IBM Data Stage, Oracle Data Integrator. 2. MDM: Reltio, Informatica MDM, Veeva Network, Reltio. 3. Data Quality: Informatica Data Quality, Trillium. 4. Big Data: Hortonworks, Cloudera, Apache Spark, Kafka, etc... 5. Data Visualization: Denodo 6. Metadata Management: Informatica Metadata Manager, Informatica Live Data Map. 7. Data Stewardship/Governance: Collibra, Elation. All qualified applicants will receive consideration for employment at PwC without regard to race; creed; color; religion; national origin; sex; age; disability; sexual orientation; gender identity or expression; genetic predisposition or carrier status; veteran, marital, or citizenship status; or any other status protected by law. PwC is proud to be an affirmative action and equal opportunity employer. _For positions based in San Francisco, consideration of qualified candidates with arrest and conviction records will be in a manner consistent with the San Francisco Fair Chance Ordinance._ All qualified applicants will receive consideration for employment at PwC without regard to race; creed; color; religion; national origin; sex; age; disability; sexual orientation; gender identity or expression; genetic predisposition or carrier status; veteran, marital, or citizenship status; or any other status protected by law.
          Technical Writer II - Alexa - Amazon.com - Seattle, WA      Cache   Translate Page      
Frameworks and Infrastructure with tools like Apache MXNet and TensorFlow, API-driven Services like Amazon Lex, Amazon Polly and Amazon Rekognition to quickly...
From Amazon.com - Mon, 13 Aug 2018 19:22:01 GMT - View all Seattle, WA jobs
          宝塔linux面板命令大全      Cache   Translate Page      

安装宝塔

Centos安装脚本
yum install -y wget && wget -O install.sh http://download.bt.cn/install/install.sh && sh install.sh
Ubuntu/Deepin安装脚本
wget -O install.sh http://download.bt.cn/install/install-ubuntu.sh && sudo bash install.sh
Debian安装脚本
wget -O install.sh http://download.bt.cn/install/install-ubuntu.sh && bash install.sh
Fedora安装脚本
wget -O install.sh http://download.bt.cn/install/install.sh && bash install.sh

管理宝塔

停止
/etc/init.d/bt stop
启动
/etc/init.d/bt start
重启
/etc/init.d/bt restart
卸载
/etc/init.d/bt stop && chkconfig --del bt && rm -f /etc/init.d/bt && rm -rf /www/server/panel
查看当前面板端口
cat /www/server/panel/data/port.pl
修改面板端口,如要改成8881(centos 6 系统)
echo '8881' > /www/server/panel/data/port.pl && /etc/init.d/bt restart
iptables -I INPUT -p tcp -m state --state NEW -m tcp --dport 8881 -j ACCEPT
service iptables save
service iptables restart
修改面板端口,如要改成8881(centos 7 系统)
echo '8881' > /www/server/panel/data/port.pl && /etc/init.d/bt restart
firewall-cmd --permanent --zone=public --add-port=8881/tcp
firewall-cmd --reload
强制修改MySQL管理(root)密码,如要改成123456
cd /www/server/panel && python tools.pyc root 123456
修改面板密码,如要改成123456
cd /www/server/panel && python tools.pyc panel 123456
查看宝塔日志
cat /tmp/panelBoot.pl
查看软件安装日志
cat /tmp/panelExec.log
站点配置文件位置
/www/server/panel/vhost
删除域名绑定面板
rm -f /www/server/panel/data/domain.conf
清理登陆限制
rm -f /www/server/panel/data/*.login
查看面板授权IP
cat /www/server/panel/data/limitip.conf
关闭访问限制
rm -f /www/server/panel/data/limitip.conf
查看许可域名
cat /www/server/panel/data/domain.conf
关闭面板SSL
rm -f /www/server/panel/data/ssl.pl && /etc/init.d/bt restart
查看面板错误日志
cat /tmp/panelBoot
查看数据库错误日志
cat /www/server/data/*.err
站点配置文件目录(nginx)
/www/server/panel/vhost/nginx
站点配置文件目录(apache)
/www/server/panel/vhost/apache
站点默认目录
/www/wwwroot
数据库备份目录
/www/backup/database
站点备份目录
/www/backup/site
站点日志
/www/wwwlogs

Nginx服务管理

nginx安装目录
/www/server/nginx
启动
/etc/init.d/nginx start
停止
/etc/init.d/nginx stop
重启
/etc/init.d/nginx restart
启载
/etc/init.d/nginx reload
nginx配置文件
/www/server/nginx/conf/nginx.conf

Apache服务管理

apache安装目录
/www/server/httpd
启动
/etc/init.d/httpd start
停止
/etc/init.d/httpd stop
重启
/etc/init.d/httpd restart
启载
/etc/init.d/httpd reload
apache配置文件
/www/server/apache/conf/httpd.conf

MySQL服务管理

mysql安装目录
/www/server/mysql
phpmyadmin安装目录
/www/server/phpmyadmin
数据存储目录
/www/server/data
启动
/etc/init.d/mysqld start
停止
/etc/init.d/mysqld stop
重启
/etc/init.d/mysqld restart
启载
/etc/init.d/mysqld reload
mysql配置文件
/etc/my.cnf

FTP服务管理

ftp安装目录
/www/server/pure-ftpd
启动
/etc/init.d/pure-ftpd start
停止
/etc/init.d/pure-ftpd stop
重启
/etc/init.d/pure-ftpd restart
ftp配置文件
/www/server/pure-ftpd/etc/pure-ftpd.conf

PHP服务管理

php安装目录
/www/server/php
启动(请根据安装PHP版本号做更改,例如:/etc/init.d/php-fpm-54 start)
/etc/init.d/php-fpm-{52|53|54|55|56|70|71} start
停止(请根据安装PHP版本号做更改,例如:/etc/init.d/php-fpm-54 stop)
/etc/init.d/php-fpm-{52|53|54|55|56|70|71} stop
重启(请根据安装PHP版本号做更改,例如:/etc/init.d/php-fpm-54 restart)
/etc/init.d/php-fpm-{52|53|54|55|56|70|71} restart
启载(请根据安装PHP版本号做更改,例如:/etc/init.d/php-fpm-54 reload)
/etc/init.d/php-fpm-{52|53|54|55|56|70|71} reload
配置文件(请根据安装PHP版本号做更改,例如:/www/server/php/52/etc/php.ini)
/www/server/php/{52|53|54|55|56|70|71}/etc/php.ini

Redis服务管理

redis安装目录
/www/server/redis
启动
/etc/init.d/redis start
停止
/etc/init.d/redis stop
redis配置文件
/www/server/redis/redis.conf

Memcached服务管理

memcached安装目录
/usr/local/memcached
启动
/etc/init.d/memcached start
停止
/etc/init.d/memcached stop
重启
/etc/init.d/memcached restart
启载
/etc/init.d/memcached reload

          Kociołek żeliwny Farmcook + palenisko „Pan 33”      Cache   Translate Page      

Kociołek myśliwski wraz z paleniskiem to komplet, który pozwoli na łatwe i czyste rozpalanie ogniska i przyrządzanie pod gołym niebem prawdziwych kulinarnych arcydzieł. Przenośne palenisko sprawi, że stałe stanowisko na rozpalanie ognia w ogrodzie będzie zbędne, a z kociołka można będzie korzystać gdziekolwiek, nawet na tarasie.

 

 

DANE TECHNICZNE:

Komplet Farmcook tworzy palenisko o średnicy 60cm z nierdzewnej stali grubości 3mm oraz odlany z żeliwa szarego kociołek myśliwski zamykany za pomocą wieka dociskanego ramieniem ze śrubą. Stabilność gwarantują naczyniu przykręcane wyprofilowane nóżki. Kociołek posiada atest wystawiony przez PZH, który zaświadcza o bezpieczeństwie przygotowywanej w nim żywności.

ZALETY PRODUKTU:

Kociołek jest przystosowany do przygotowywania potraw bezpośrednio w ognisku, w bardzo wysokiej temperaturze. Dlatego też dania z kociołka wyróżniają się tak charakterystycznym smakiem i zapachem. Gotowanie na świeżym powietrzu jest lubiane również za nieocenioną atmosferę spotkań przy ognisku.


          Kociołek węgierski Farmcook Inox na trójnogu 10l      Cache   Translate Page      

Kociołek węgierski to sposób na smaczne, zdrowe i przynoszące mnóstwo frajdy gotowanie na świeżym powietrzu. Przyrządzać w nim można węgierskie zupy i gulasze, a także wszelkie dania jednogarnkowe, takie jak bigos czy leczo. Również konfitury robione w kociołku nad ogniskiem wyróżnia smak, któremu nie sposób się oprzeć.

 

 

DANE TECHNICZNE:

Kociołki węgierskie Farmcook posiadają niezbędne atesty dopuszczające do bezpośredniego kontaktu z żywnością!

Kociołek węgierski Farmcook o pojemności 10l wykonany jest z trwałej i wytrzymałej blachy inox, która bardzo dobrze rozprowadza i utrzymuje ciepło. Naczynie podwieszone jest na wykonanym z pomalowanej proszkowo stali stelażu o wysokości 120cm. Dzięki specjalnej konstrukcji oraz ostrym zakończeniom nóg, trójnóg jest stabilny można rozłożyć nawet na nierównej powierzchni. Mocny łańcuch zakończony odpinanym karabinkiem pozwala na regulowanie wysokości kociołka nad paleniskiem. Naczynie pomieści danie dla 5-7 osób.

ZALETY PRODUKTU:

Kociołek węgierski jest praktycznym i funkcjonalnym narzędziem, pozwalającym przygotowywać nad ogniskiem zachwycające smakiem i zapachem dania. Takie wyposażenie doskonale sprawdzi się na biwaku czy wypadzie za miasto, a także w ogrodzie czy na tarasie (razem z paleniskiem). Kociołek można w prosty sposób odpiąć i umyć, a także zawiesić w jego miejscu ruszt do grilla.


          Kociołek węgierski Farmcook Inox na trójnogu 14l      Cache   Translate Page      

Kociołek węgierski na trójnogu jest funkcjonalnym, uniwersalnym sprzętem sprawdzającym się w wielu zastosowaniach. Wszędzie tam, gdzie tylko da się rozpalić ognisko można w nim przyrządzić pyszne, aromatyczne potrawy. Biwak, wycieczka za miasto czy też weekendowy relaks w ogrodzie to doskonała okazja, by ugotować w kociołku zachwycające smakiem i zapachem danie.

 

DANE TECHNICZNE:

Wykonany z mocnej, trwałej, wytrzymałej blachy inox kociołek cechuje się równomiernym rozprowadzaniem i długotrwałym utrzymywaniem ciepła. Podwieszony jest na stelażu wykonanym ze stali pomalowanej proszkowo przy pomocy zakończonego odpinanym karabinkiem łańcucha. Mocny i gruby łańcuch daje możliwość regulowania wysokości kociołka nad płomieniami. Dzięki ruchomej konstrukcji trójnóg można rozstawić nawet na nierównej powierzchni. Kociołek ma pojemność 14 litrów (na 6-9 osób), stelaż jest wysoki na 120cm. Kociołki Farmcook mają niezbędne atesty dopuszczające je do całkowicie bezpośredniego kontaktu z żywnością!

ZALETY PRODUKTU:

Kociołek węgierski stanowi świetną alternatywę dla bardzo popularnych urządzeń do grillowania. Spotkanie przy ognisku w gronie rodziny lub przyjaciół uświetnione pysznościami z kociołka to pomysłowy sposób na spędzanie wolnego czasu na świeżym powietrzu.


          Zestaw Farmcook: kociołek węgierski Inox 10l z pokrywką na trójnogu + palenisko      Cache   Translate Page      

Zestaw Farmcook z kociołkiem węgierskim to sposób na pyszne, zdrowe, proste i sprawiające przyjemność gotowanie pod gołym niebem. Zarówno tradycyjne zupy i gulasze węgierskie, jak i wszystkie inne jednogarnkowe dania, czy też konfitury pichcone nad ogniskiem wyróżnia smak, któremu mało kto jest w stanie się oprzeć.

 

DANE TECHNICZNE:

Zestaw Farmcook obejmuje 3 elementy: kociołek węgierski 10l, trójnóg 120cm z łańcuchem oraz palenisko o średnicy 60cm. Kociołek wykonany z trwałej i wytrzymałej blachy inox doskonale rozprowadza i utrzymuje ciepło. Naczynie podwieszone jest na stelażu wykonanym przy pomocy zakończonego praktycznym karabinkiem łańcucha. Ozdobne palenisko ze stali pomalowanej proszkowo pozwala na bezpieczne i łatwe rozpalanie ognia w dowolnym miejscu. Mocny i gruby łańcuch daje możliwość regulowania wysokości kociołka nad paleniskiem. Gwarancja bezpieczeństwa: urządzenia marki Farmcook posiadają niezbędne atesty dopuszczające do bezpośredniego kontaktu z żywnością. Kociołek o pojemności 10 litrów wystarczy do przyrządzania posiłków na 5-7 osób.

ZALETY PRODUKTU:

Zestaw Farmcook to komplet praktycznych i funkcjonalnych narzędzi, pozwalających przyrządzać potrawy zniewalające smakiem i zapachem. Kociołek można zabrać ze sobą na biwak lub wycieczkę za miasto, ustawić w ogrodzie czy na tarasie. Można też zawiesić na trójnogu ruszt i korzystać z grilla ogniskowego. Samo palenisko jest też pomysłowym sposobem na oświetlenie tarasu czy zakątka ogrodu.


          Evaluating Apache Hadoop Software for Big Data ETL Functions      Cache   Translate Page      
IT Best Practices: Intel IT recently evaluated Apache Hadoop software for ETL (extract, transform, and load) functions. We first studied industry sources to learn the advantages and disadvantages of using Hadoop for big data ETL functions. We then tested what we learned with a real business use case that involved analyzing system logs as [...]
          Тамуш: КЛАССНЫЙ плотный узор крючком!      Cache   Translate Page      

Это цитата сообщения галина5819 Оригинальное сообщениеКЛАССНЫЙ плотный узор крючком! Groovyghan или Tears of Apache (слезы Apache). Вяжется на одном дыхании!

 

Когда мы хотим использовать вязание крючком для обивки, нам нужен толстый и плотный узор, который похож на ткань. Этот узор, который имеет геометрический мотив и прекрасно подходит в качестве покрытия для стульев, покрывал, подушек.

PUNTO-CROCHET (640x392, 154Kb)

 

Посмотрите видео ниже, нажав кнопку воспроизведения.





 

 

Aprende a tejer Punto Maravilloso (Groovyghan


          After Upgrade No Inbox or Sent Box      Cache   Translate Page      

Replies: 0

Hi Everyone,
I have upgraded to BuddyPress: 3.1.0 and now i can’t seem to access messages and i am using Thrive Nouveau Child theme.

Browser Developer Browser error

JQMIGRATE: Migrate is installed, version 1.4.1
http://www.google-analytics.com/analytics.js:1 Failed to load resource: net::ERR_BLOCKED_BY_CLIENT
underscore.min.js?ver=1.8.3:5 Uncaught SyntaxError: Unexpected token ;
at new Function (<anonymous>)

System Info

// Generated by the Send System Info Plugin //

Multisite: No

WordPress Version: 4.9.8
Permalink Structure: /%postname%/
Active Theme: Thrive Nouveau Child

PHP Version: 7.2.9
MySQL Version: 5.6.40
Web Server Info: Apache

WordPress Memory Limit: 64MB
PHP Safe Mode: No
PHP Memory Limit: 256M
PHP Upload Max Size: 1000M
PHP Post Max Size: 1000M
PHP Upload Max Filesize: 1000M
PHP Time Limit: 600
PHP Max Input Vars: 10000
PHP Arg Separator: &
PHP Allow URL File Open: No

WP_DEBUG: Disabled

WP Remote Post: wp_remote_post() works

Session: Enabled
Session Name: PHPSESSID
Cookie Path: /
Save Path: /var/cpanel/php/sessions/ea-php72
Use Cookies: On
Use Only Cookies: On

DISPLAY ERRORS: N/A
FSOCKOPEN: Your server supports fsockopen.
cURL: Your server supports cURL.
SOAP Client: Your server has the SOAP Client enabled.
SUHOSIN: Your server does not have SUHOSIN installed.

ACTIVE PLUGINS:

404 to 301: 3.0.1
Absolutely Glamorous Custom Admin: 6.4.1
bbP private groups: 3.6.7
bbPress: 2.5.14
BP Block Users: 1.0.2
BP Local Avatars: 2.2
BP Simple Front End Post: 1.3.4
BuddyBlog: 1.3.2
BuddyKit: 0.0.3
BuddyPress: 3.1.0
BuddyPress Clear Notifications: 1.0.4
BuddyPress Docs: 2.1.0
BuddyPress Global Search: 1.1.9
BuddyPress Security Check: 3.2.2
BuddyPress User To-Do List: 1.0.4
GA Google Analytics: 20180828
GD bbPress Toolbox Pro: 5.2.1
GDPR: 2.1.0
Gears: 4.1.9
Gravity Forms: 2.3.2
Hide Admin Bar From Front End: 1.0.0
Image Upload for BBPress: 1.1.15
Insert Headers and Footers: 1.4.3
Invite Anyone: 1.3.20
iThemes Security Pro: 5.4.8
Kirki Toolkit: 3.0.33
MediaPress: 1.4.2
MediaPress – Downloadable Media: 1.0.3
Menu Icons: 0.11.2
Nifty Menu Options: 1.0.1
Really Simple SSL: 3.0.5
reSmush.it Image Optimizer: 0.1.16
Restricted Site Access: 7.0.1
s2Member Framework: 170722
SeedProd Coming Soon Page Pro: 5.10.8
Send System Info: 1.3
Slider Revolution: 5.4.7.3
SSL Insecure Content Fixer: 2.7.0
TaskBreaker – Group Project Management: 1.5.1
The Events Calendar: 4.6.23
The Events Calendar PRO: 4.4.30.1
Users Insights: 3.6.5
WP-Polls: 2.73.8
WPBakery Page Builder: 5.5.4
WP Mail SMTP: 1.3.3
WP Nag Hide: 1.0
WPS Hide Login: 1.4.3
Yoast SEO: 8.2
YouTube Live: 1.7.10

Thanks

Rockforduk


          Re: Missing token for file downloads      Cache   Translate Page      
by Acs Gabor.  

Dear Dani,

Thanks very much, it was an Apache config issue indeed.

The problem was with a Rewriterule - we had to add the [QSA] flag to make it work.

Best regards,

Gabor


          Słów kilka o maseczkach do twarzy Yves Rocher, które pokochała moja cera!      Cache   Translate Page      
Maseczki do twarzy uwielbiam tak samo, jak peelingi, po te produkty mogłabym sięgać codziennie, nie jest to jednak wskazane więc tego nie robie. Używam ich, tak jak jest to zalecane, czyli 1-2 razy w tygodniu. Uwielbiam maseczki oczyszczające, stosowanie ich ma dla mnie ogromne znaczenie. Mam cerę mieszaną w kierunku suchej dlatego stosuje z tą samą intensywnością maseczki nawilżające, jak i oczyszczające.


Często też używam jednej maseczki po drugiej. Do tej pory bardzo polubiłam oczyszczającą maseczkę do twarzy marki Palmers, chciałam znaleźć jeszcze jedną dobrą maseczkę tego typu i się udało. Maseczka głęboko oczyszczająca Pure System marki Yves Rocher bardzo miło mnie zaskoczyła nie tylko podstawowymi cechami, czyli bardzo ładnym zapachem i odpowiednio gęstą konsystencją, ale też działaniem.


Zawsze po jej użyciu moja cera była nie tylko dobrze oczyszczona, ale też bardzo miła w dotyku, ukojona i zregenerowana. Okazała się być bardzo delikatna i skuteczna zarazem dlatego tak bardzo ją polubiłam. Nigdy nie miałam problemu z jej zmyciem. Zawsze do jej zastosowania zachęcało mnie jej szybkie użycie, bo bardzo łatwo się rozprowadzała na uprzednio oczyszczonej skórze twarzy.


Na tle innych wyróżnia się szybkością działania, wystarczy ją trzymać tylko 3 minuty na twarzy, by móc cieszyć się niezwykle dobrze oczyszczoną skórą twarzy. Kolejną maseczką, którą zachwyciłam się w ostatnim czasie, również jest marki Yves Roche. Zawsze miałam problem z nawilżającymi maseczkami, te, które mi odpowiadały, przeważnie znajdowały się w saszetkach i były przeznaczone jednorazowego użytku.


Ciągły ich zakup nie był ekonomicznym rozwiązaniem, dlatego też przez cały czas poszukiwałam czegoś w większym opakowaniu. Długo zastanawiałam się, czy kupić maseczkę nawilżającą marki Yves Rocher, ale jednak się skusiłam i nie żałuję. Obawiałam się jej zapachu, nie chciałam by, zaskoczyła mnie zbyt świeżym zapachem, typowym dla kosmetyków „for men”, nie stało się tak, dlatego już podczas pierwszego użycia się do niej przekonałam. Zauroczyłam się jej ultralekką formułą, która w porównaniu z innymi bardzo ekspresowo się wchłania. 


Uznałam, że zasługuje na uwagę, bo szybko po jej zastosowaniu można zaobserwować pozytywne rezultaty. Dzięki tej maseczce nawilżenie skóry się poprawia, cera odzyskuje zdrowy wygląd, ponieważ jej koloryt zostaje wyrównany. Odwodnieniu może ulec każda cera nawet taka jak moja, ta maseczka jest przeznaczona do skóry odwodnionej, dlatego też jej działanie nawilżające jest tak intensywne i dość długo odczuwalne po jej zastosowaniu. Nigdy nie musiałam usuwać jej nadmiaru, bo zawsze wchłaniała się całkowicie, co uznaje za wielki plus, bo dzięki temu tak naprawdę nie straciłam ani 1 ml tej maseczki. Uważam ją za produkt bardzo wydajny i wart przetestowanie przez wszystkie osoby, które chcą poprawić nawilżenie skóry twarzy.


Koniecznie dajcie znać czy używaliście już tych maseczek! Poinformujcie mnie, jak się u Was sprawdziły!

          A question of security: What is obfuscation and how does it work?      Cache   Translate Page      

Every day, new malware samples are uncovered ranging from zero-day exploits to ransomware variants.

With malware now so common and successful cyberattacks offering potentially high -- albeit criminal -- returns, there is little need for garden-variety hackers to learn how to develop exotic, custom malicious code.

Instead, off-the-shelf malware can be purchased easily by anyone. While some of the most sophisticated forms of malware out there can fetch prices of$7,000 and more, it is also possible to pick up exploit kits and more for far less,or even for free.

The problem with this so-called "commodity" malware is that antivirus companies are well aware of their existence and so prepare their solutions accordingly with signatures that detect the malware families before they can cause damage.

So, how do threat actors circumvent such protection?

This is known as obfuscation.

The goal of obfuscation is to anonymize cyberattackers, reduce the risk of exposure, and hide malware by changing the overall signature and fingerprint of malicious code -- despite the payload being a known threat.

In a Threat Intelligence Bulletin , cybersecurity firm Cylance has explained how the technique works.

"The signature is just a hash," the researchers note. "In this context, a hash refers to a unique, alphanumeric representation of a piece of malware. Signatures very often are hashes, but they can also be some other brief representation of a unique bit of code inside a piece of malware."

CNET: Apple is building an online portal for police to make data requests

Rather than attempt to create a new signature through changing malware itself, obfuscation instead focuses on delivery mechanisms in an attempt to dupe antivirus solutions which rely heavily on signatures. (In comparison to the use of machine learning, predictive analytics, and AI to bolster AV, some researchers argue this has the potential to become obsolete.)

See also: Mirai, Gafgyt IoT botnets stab systems with Apache Struts, SonicWall exploits

Obfuscation can include a variety of techniques to hide malware, creating layers of obscurity which Cylance compares to "nested figures in a Russian doll."

These techniques include:

Packers: These software packages will compress malware programs to hide their presence, making original code unreadable. Crypters: Crypters may encrypt malware programs, or portions of software, to restrict access to code which could alarm an antivirus product to familiar signatures. Dead code insertion: Ineffective, useless code can be added to malware to disguise a program's appearance. Instruction changes: Threat actors may alter instruction codes in malware from original samples that end up changing the appearance of the code -- but not the behavior -- as well as change the order and sequence of scripts. Exclusive or operation (XOR): This common method of obfuscation hides data so it cannot be read unless trained eyes apply XOR values of 0x55 to code. ROT13: This technique is an ASM instruction for "rotate" which substitutes code for random letters.

TechRepublic: Why higher education is one of the worst industries at handling cyberattacks

"While some antivirus products search for common obfuscating techniques so that they too may be blacklisted, this practice is not nearly as well established as the blacklisting of malware payload signatures," the researchers say.

In one interesting example of obfuscation which has recently come under the radar, Cylance found that a Microsoft windows tool called PowerShell is being abused by attackers.

A malware sample obtained by the company was a .ZIP file containing a PDF document and VBS script which used rudimentary Base64 encoding to obfuscate one layer.

This was followed by the use of string splitting, tick marks, and random letter capitalizations to split and alter the signature.

One particular file in the package, 1cr.dat, revealed the use of another obfuscation method. This was a string encryption setup, called SecureString, which is commonly used by legitimate applications to encrypt sensitive strings of code within applications using Microsoft's built-in DPAPI.

The payload also contained instructions to avoid sandboxes, which are used by security researchers to unpack and analyze malware.

At the time of discovery, only three antivirus signature engines detected the attempt at obfuscation and only two registered the malware at first deployment. This has now increased to 18 products.

For as long as malware exists, so too will obfuscation. While there is little that everyday users can do about the attack method, cybersecurity firms are taking notice -- as now, it is not just zero-days which are of concern, but the increasing use of common malware in creative ways.

Cylance said:

"Threat actors are increasingly using obfuscation techniques in combination with commodity malware. This trend runs counter to a widely-held assumption in the information security space which holds that highly customized malware paired with zero-day exploits are deserving of the most attention. And while use of those tools is concerning and should be monitored, attention should not be completely divested from those threat actors - including advanced threat actors - who are succeeding right now at bypassing antivirus products with tools that are not "zero- day" but "every day."


          Get started with REST services with Apache Camel      Cache   Translate Page      
Cloud

REST services are becoming an increasingly popular architectural style for connecting modern systems with the cloud and to each other as the need for flexible APIs and microservices grows. With Apache Camel, you can write REST services easier and quicker using the REST domain-specific language (DSL).

During a poster session at the Grace Hopper Celebration of Women in Computing (September 26 - 28, 2018 in Houston), we will walk audiences through developing their first Camel route using the REST DSL.


read more
          XCP-ng en Kubernetes      Cache   Translate Page      
Replies: 9 Last poster: Bastiaan V at 13-09-2018 15:32 Topic is Open Apache schreef op donderdag 13 september 2018 @ 09:24: Verder is volume management, dynamic provisioning ook enorm eenvoudig op te zetten, dus basic storage is triviaal geworden op een kubernetes cluster, voor meer geavanceerdere zaken is er rook.io of longhorn. En logging is ook enorm makkelijk te fixen, alle containers streamen naar logfiles die door een "DeamonSet" (gegarandeerd 1 container per host) makkelijk naar 1 syslog endpoint, fluentd of logstash doorgestuurd kunnen worden om ze dan in 1 file/elasticsearch/syslog te krijgen met alle metadata die je maar kan wensen.Een kubernetes cluster is al een stukje lastiger om op te zetten en te beheren dan een XCP-ng omgeving. en wie gaat de rook.io omgeving beheren? (Ik kan je uit ervaring vertellen dat dit niet zomaar draait... (nog niet ;-) ) ). Of het optuigen van de logging omgeving? Ik het echt waardevol om je logs te centraliseren, wanneer je ook de vier diensten die je aanbied op 4 vm's kan plaatsen? (Je hebt de logs dan al netjes per dienst 'gecentraliseerd' in /var/log Je brengt met containerisatie de verantwoordelijkheid voor veel van die zaken meer naar de juiste partij, en de overhead van vm's mag zeker niet overschat worden.In de praktijk ben je als beheerder toch de sjaak wanneer bijvoorbeeld owncloud niet zou werken, en dan zal je de door een ander opgebouwde (in verhouding over-complexe) omgeving moeten gaan uitpluizen. De klant / eindgebruiker neemt "het is de schuld van de vendor" namelijk niet als antwoord.. De overhead hoeft bij een diverse omgeving zoals TS beschrijft ook niet heel groot te zijn, en is daarnaast in de praktijk vaan een non-issue. Uren zijn over het algemeen vele malen duurder dan ijzer. overcommitment is interesant in de storage en netwerk wereld. In de compute wereld zou ik er ver vandaan blijven :-)
          DevOps Engineer - Sage - Richmond, BC      Cache   Translate Page      
Apache, IIS, Nginx plus Tomcat, Java, Passenger, Rails, Node.js. We have several vacancies for strong candidates, open to new technologies and approaches, to...
From Sage - Fri, 31 Aug 2018 18:51:55 GMT - View all Richmond, BC jobs
          Apache 编译error: ld returned 1 exit status      Cache   Translate Page      
make 错误 collect2: error: ld returned 1 exit status make[2]: *** [htpasswd] Error 1 make[2]: Leaving directory `/lnmp/tar/httpd-2.4.34/support' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/lnmp/tar/httpd-2.4.34/support' make: *** [all-recursive] Error 1 解决方法: [r...
          Digital Incident Triage Specialist - HSBC - Vancouver, BC      Cache   Translate Page      
Platform knowledge across Unix, Linux, Java, Apache, Database, &amp; MQ technologies. HSBC Bank Canada, a subsidiary of HSBC Holdings plc, is the largest and...
From HSBC - Tue, 04 Sep 2018 18:41:50 GMT - View all Vancouver, BC jobs
          DevOps Engineer - Sage - Richmond, BC      Cache   Translate Page      
Apache, IIS, Nginx plus Tomcat, Java, Passenger, Rails, Node.js. We have several vacancies for strong candidates, open to new technologies and approaches, to...
From Sage - Fri, 31 Aug 2018 18:51:55 GMT - View all Richmond, BC jobs
          phpstudy2018 windows 下搭建https 环境      Cache   Translate Page      
第一: 到阿里云申请免费的证书(如何申请请查看:https://bbs.aliyun.com/read/573056.html?spm=a2c4e.11155515.0.0.effb3ebbE7MJz0),成功签发后,下载证书 第二: 打开phpstudy 下的apache 目录 并建立 cert文件夹,将下载的证书复制一份到cert文件夹中:如下图所示 第三:开启phpstudy 下的openssl 拓展 第四:取消...
          Digital Incident Triage Specialist - HSBC - Vancouver, BC      Cache   Translate Page      
Platform knowledge across Unix, Linux, Java, Apache, Database, &amp; MQ technologies. HSBC Bank Canada, a subsidiary of HSBC Holdings plc, is the largest and...
From HSBC - Tue, 04 Sep 2018 18:41:50 GMT - View all Vancouver, BC jobs
          The Problem With Terraforming Mars      Cache   Translate Page      

Mars has loomed large throughout human history, our imaginations filling its red vistas with fantastic detail long before our space missions returned even rudimentary photos. Back when our best observations of the Red Planet showed only a rusty disc mottled with fuzzy dark patches, people debated whether those marks were natural features, or perhaps the engineering projects of technologically advanced Martians, or maybe something wilder: In 1912, the Salt Lake Tribune ran a headline that read “Mars Peopled by One Vast Thinking Vegetable!,” accompanied by an illustration of a mossy-looking Mars, with one enormous eyeball on a stalk protruding out into deep space.

Because we have imagined Mars so long, it’s easy to forget that Mars’ history is its own. Written into the desert are bygone epochs of the Red Planet, hidden beneath plains peppered with cantankerous-looking boulders, their expanse interrupted by shining, silken dunes and towering volcanoes. Modern Mars research tells us this landscape once boasted vast stretches of water, a warmer climate, and thicker atmosphere, but all have since been lost, leaving the cold, dry surface we see today. In some places, tire tracks mark the record of human exploration—at least by our robotic avatars, the Mars rovers. While Mars is a “dead” planet in the sense of having no notable geologic activity and no known forms of life today, it still has weather (including the massive global dust storm now enfolding NASA’s long-lived Mars Opportunity rover). Unlike the moon, where the entire record of humanity’s off-Earth adventures lays written and unperturbed in the lunar dust, Mars’ winds will eventually wipe these rover tracks away.

For would-be explorers of Mars, these barren plains are a tempting destination—but why anyone wants to go to Mars depends on whom you ask. Some look at the pristine landscapes and imagine that they may answer some of our most pressing questions about the origins and evolution of life in the universe: Has life ever existed on another world? Might it still exist under the Martian surface today? If Mars has ever had life, how different (or not) is it from the life we find on Earth? And if life never got started there (or started, but failed to flourish)—why? Mars’ proximity, and the (relative) ease of translating terrestrial exploration tools to its rocky surface, makes it one of the prime places to both ask and answer these questions.

However, others see these Martian vistas as a blank slate, a drawing board upon which to write a new history for both Mars and humanity. Terraforming, or the idea of radically transforming another world’s environment to be more hospitable to life, has been around for a long time in both science fiction and the scientific literature. One of the most influential books about terraforming straddled the line between fiction and fact: Scientist James Lovelock and writer Michael Allaby’s 1984 novel The Greening of Mars used science fiction to lay out possible means of transforming the Red Planet into literal pastures. While ideas for terraforming methods vary wildly, the basic reasoning is that putting greenhouse gases (usually more carbon dioxide) into the planet’s atmosphere might create enough warming and atmospheric pressure for liquid water to exist on the Martian surface again—a toe hold in the uphill climb toward a once-more habitable Mars. Today, visions of a terraformed Mars come from SpaceX’s promo videos: an animated Mars spins into the future, its surface becoming green, clouds wafting in its thickened atmosphere. Presumably, the water, oxygen, and carbon dioxide implied by these images have been “liberated” by the inescapable enfant terrible of technology, Elon Musk, whose terraforming proposal consists of dropping thermonuclear weapons over the polar ice caps.

Despite terraforming’s hold on the popular imagination, it remains solidly in the realm of fiction. For one thing, Mars seems to lack the necessary reserves of carbon dioxide to pump up its atmosphere and warm it in the first place. Just recently, researchers examined all the inventories of carbon dioxide known from the past several decades of Mars research—concluding that even if one could somehow put it all, from every source, into the atmosphere, it would achieve only minuscule changes in the atmosphere’s pressure and warmth. What’s more, raising the temperature and pressure of the atmosphere only means that any available water won’t boil off immediately, but it would still evaporate, and fast, disappearing into the (still) thin air. Because the Martian atmosphere is incredibly dry, that water would never rain out and return to the ground, as our water does on Earth, and would instead remain sequestered in the (still) dry air. While the authors admit that one could always appeal to possibly hidden deposits of carbonate that haven’t been found yet, making any carbon dioxide available from those deposits would require a planetary-scale strip mining operation to harvest and process it out of the rocks.

In short, a planetary environment is not an empty swimming pool that can simply be refilled with a garden hose and brought back to its previous function. That fact should come as no surprise to anyone paying attention to climate change, a global disaster composed of both inexorable alterations to the habitability of Earth and the ongoing failure of both governments and industry to act with sufficient urgency to preserve it. While we might debate the possibility of transforming the habitability of Mars, we only have a demonstrated track record of unintentionally changing a planet to be less hospitable to humanity and no practicable idea of how to do the reverse.

In many ways, questions of whether we could technically terraform Mars are beside the point—it’s the way we ask them that reveals so much about how we imagine ourselves in relationship to land and environment, especially those here on Earth. After all, one needn’t go to Mars to find a pristine frontier, or at least the idea of one—the concept of wild, untouched land is a deeply embedded part of the American mythos. Wild landscapes were memorialized by the rapturous paintings and prose of naturalists like Thomas Cole and John Muir, and when national parks were first created in the 19th century, they were (and still are) seen as safeguarding something truly unique, precious, and yes, American, for generations to come. In 2017, when the Trump administration announced its intention to shrink the Grand Staircase-Escalante and Bear Ears national monuments, there was widespread public outcry. Around the same time, Utah Rep. Jason Chaffetz, a Republican, introduced a bill into the House that would allow for the sell-off of public lands in Utah. He was forced to withdraw the bill after immediate, broad-based opposition.

But pure wilderness, much like the invocation that humanity’s destiny is to leave the Earth, is an invention—and a relatively recent creation, at that. Though national parks are now beloved treasures, their preservation as untouched wilderness came as a surprise to the many indigenous nations who were actively living on those lands at the time. Because the settlers’ idea of nature was of a place in which humans did not live, the preservation of these lands fundamentally meant the removal of the people who did live there—and because those settlers could not fathom the humanity of native peoples, the separation of people from their homelands drew the blueprints for forcible relocation and assimilation, as well as the reservation system that persists today.

Advocates of humans living on Mars argue that no such ethical quandary exists for Mars: No life seems to live there, and if it does, it is likely mere microbes. After all, we kill microbes all the time here on Earth—in fact, we kill those microbes that might otherwise hitch a ride on our spacecraft and contaminate the environment of Mars. What’s a few more? Even those who are compelled by studying Mars’ own history sometimes argue that the transformation of the Martian environment is a foregone conclusion, so we may as well get it over with. Within this camp, there are researchers who eschew the word terraforming in favor of ecosynthesis, a term borrowed from restoration ecology here on Earth, meaning an intervention to restore a previous disrupted environment. Where “terraforming” implies that the planet will become more like our Earth (terra), ecosynthesis implies that restoring Mars’ previously thick atmosphere, even if unbreathable by humans, is a moral imperative that humanity bears to any Martian life that might survive today. One wonders whether these would-be saviors of Mars would argue for a return to Earth’s early atmosphere, before cyanobacteria provided the oxygen humanity breathes.

Does an environment’s worth exist only in relationship to humankind, or more broadly, in relationship to life? If so, whose life? Must an environment have a purpose to be worthy of existence? If so, whose purpose? In Harper’s this month, Mort Rosenblum and Samuel James’ depiction of copper mining in the Southwestern United States argues that we behave as though the merit of land is only in its use. James’ photographs show a vast, terraced network of pit mines ripped into the ruddy desert, yawning like the mouth of hell. The open pits drool turquoise rivulets of toxic waste into some of the most stunning spaces on the continent—is this environmental devastation the Silver Bell Mine of Arizona, or a future terraforming operation on Mars?

Terraforming might seem like just a particularly difficult engineering challenge, but in reality, it is an escape hatch from the far more difficult task of confronting our past, present, and future here on Earth. When we invoke worlds like Mars as our new frontier, we are erasing the complex history of what frontiers have meant here on Earth, as well as the legacy of inequity that continues on today. We must recognize that the radical transformation of land—whether on this planet or beyond—is also the erasure of history, and in that erasure we may be giving up something profound on Mars, just as we do here on Earth.


          TÉVEZ: “CRISTIANO SE PREPARA PARA SER EL MEJOR DEL MUNDO Y A LEO LE SALE NATURAL”      Cache   Translate Page      
Carlos Tévez tuvo el privilegio de ser uno de los pocos futbolistas que jugó con Cristiano Ronaldo (Manchester United) y con Leo Messi (Selección Argentina). En el marco de una charla con jóvenes en la Parroquia Espíritu Santo de San Isidro, el Apache dejó claro cuál es la principal diferencia entre el astro portugués y

Continuar leyendo →


          Looking for beta testers for our 'out of the oven' Hosted Email Platform      Cache   Translate Page      

Hi LET'ers,

After many months of DevOps, our team was able to launch an open beta for our Hosted Email solution.

As usual, we love to have you all help us catch n' patch em bugs and fine tune the systems

Product Info

CloudCone's Hosted Emails lets you create email boxes for your favorite domains, on a worry-free email platform.

Steps to Join Beta

  1. Visit https://app.cloudcone.com/email/create
  2. Create your first email organization
  3. Add your domain
  4. Verify your domain
  5. Create a new Mailbox
  6. Let us know how it goes 😀

We have a detailed documentation and a how-to: https://help.cloudcone.com/en-us/category/66tatz/

Beta restrictions

  • Limited email sending
  • Limited to creating 1 x email organization
  • Beta clients will have a special rate when official product is announced
  • Beta period will run for 1 week, thread will be updated

Pricing

We have not decided the price yet, however we'd love to keep it lowend <3 And we will also continue the unlimited account and domain policy 😀

Tech specs

Most parts of the system are OpenSource software, including the Postfix mail server, Dovecot and SoGo web mail interface.
SMTP Relay: MailChannels

We've also setup Apache Bayes SPAM Auto Learning, the more you mark as SPAM, the more it'll learn.

Hope to see you aboard. For any bugs or feedback, do leave a comment ♥️
Cheers!


          Arcadia Data Demonstrates 88x Faster Performance for Gaining Insights...      Cache   Translate Page      

Technical Review Shows Performance and Concurrency Advantages for Analytics on Apache Hadoop®

(PRWeb September 13, 2018)

Read the full story at https://www.prweb.com/releases/arcadia_data_demonstrates_88x_faster_performance_for_gaining_insights_from_modern_data_platforms/prweb15758979.htm


          Vandaag op WEG Tryon: Edward, Emmelie en eventingdressuur      Cache   Translate Page      
Wat brengt dag 2 van de Wereldruiterspelen in Tryon? Horses.nl zet het voor u op rij. Vandaag wordt de landenwedstrijd dressuur beslist met het tweede deel van de Grand Prix. Voor Nederland komen Emmelie Scholtens met Apache (18.17 uur) en...
          ApacheConf - company license      Cache   Translate Page      
ApacheConf - company license Free Download
          FS#60043: [apache-ivy] uses old Apache Ant lib path      Cache   Translate Page      
Description:
The current PKGBUILD for apache-ivy symlinks ivy.jar to /usr/share/java/apache-ant/ivy.jar , but the current package for Apache Ant ("ant", which replaced "apache-ant") expects Ivy to be in /usr/share/java/ant/ ("apache-ant" shortened to "ant", as in the new package name).

Additional info:
* apache-ivy 2.4.0-2
* ant 1.10.5-1

Steps to reproduce:
Attempt to build something with an Apache Ant build file which references an Ivy action, and you'll get an error. I first noticed it while building filebot (AUR).
          Trade of the Day: Apache Stock Is a Gusher      Cache   Translate Page      

InvestorPlace - Stock Market News, Stock Advice & Trading Tips

Shares of energy companies such as Apache are waking up. APA stock has popped more than 8% over the past four trading days and, all else being equal, looks to have plenty more upside left.

The post Trade of the Day: Apache Stock Is a Gusher appeared first on InvestorPlace.


          Prince Harry Meets With New Royal Marine Recruits      Cache   Translate Page      

Prince Harry, Duke of Sussex, took over his grandfather Prince Philip’s role as captain general of the marines in December 2017. Nine months later, he has made his first official visit to the Royal Marines Commando Training Center.

As reported by People, the prince arrived in a Royal Navy Wildcat Maritime Attack aircraft at approximately 10 a.m. local time to the base in Lympstone in the south of England. The Wildcat is “a maritime attack helicopter from the Commando Helicopter Force, which provides crucial aerial support to the Royal Marines,” according to Express UK.

Prince Harry served two tours in Afghanistan, where he flew Apache helicopters. He rose to the rank of captain in the Blues and Royals, a regiment of the Household Cavalry, through his service.

Upon arrival, the new Duke of Sussex received a “ceremonial welcome” before meeting with new recruits both in the gym and on the assault course.

Click here to continue and read more...


          Concepteur logiciel sénior C++ - 251804 - Procom - Côte-Saint-Luc, QC      Cache   Translate Page      
Connaissance en développement d’applications Java sous l’environnement J2EE avec les technologies Wildfly, Hibernate, Apache, Eclipse, JSF, Primfaces, HTML, CSS...
From Procom - Mon, 13 Aug 2018 18:15:50 GMT - View all Côte-Saint-Luc, QC jobs
          PHP 5.6.38, 7.0.32, 7.1.22 和 7.2.10 发布,多项内容修复      Cache   Translate Page      

php(外文名:PHP: Hypertext Preprocessor,中文名:“超文本预处理器”)是一种通用开源脚本语言。语法吸收了C语言、Java和Perl的特点,利于学习,使用广泛,主要适用于Web开发领域。PHP 独特的语法混合了C、Java、Perl以及PHP自创的语法。它可以比CGI或者Perl更快速地执行动态网页。用PHP做出的动态页面与其他的编程语言相比,PHP是将程序嵌入到HTML(标准通用标记语言下的一个应用)文档中去执行,执行效率比完全生成HTML标记的CGI要高许多;PHP还可以执行编译后代码,编译可以达到加密和优化代码运行,使代码运行更快。

PHP 5.6.38

- Apache2

. Fixed bug #76582 (XSS due to the header Transfer-Encoding: chunked). (Stas)

PHP 7.0.32

- Apache2

. Fixed bug #76582 (XSS due to the header Transfer-Encoding: chunked). (Stas)

PHP 7.1.22

- Core:

. Fixed bug #76754 (parent private constant in extends class memory leak).

(Laruence)

. Fixed bug #72443 (Generate enabled extension). (petk)

- Apache2:

. Fixed bug #76582 (Apache bucket brigade sometimes becomes invalid). (stas)

- Bz2:

. Fixed arginfo for bzcompress. (Tyson Andre)

- gettext:

. Fixed bug #76517 (incorrect restoring of LDFLAGS). (sji)

- iconv:

. Fixed bug #68180 (iconv_mime_decode can return extra characters in a

header). (cmb)

. Fixed bug #63839 (iconv_mime_decode_headers function is skipping headers).

(cmb)

. Fixed bug #60494 (iconv_mime_decode does ignore special characters). (cmb)

. Fixed bug #55146 (iconv_mime_decode_headers() skips some headers). (cmb)

- intl:

. Fixed bug #74484 (MessageFormatter::formatMessage memory corruption with

11+ named placeholders). (Anatol)

- libxml:

. Fixed bug #76777 ("public id" parameter of libxml_set_external_entity_loader

callback undefined). (Ville Hukkam&auml;ki)

- mbstring:

. Fixed bug #76704 (mb_detect_order return value varies based on argument

type). (cmb)

- Opcache:

. Fixed bug #76747 (Opcache treats path containing "test.pharma.tld" as a phar

file). (Laruence)

- OpenSSL:

. Fixed bug #76705 (unusable ssl => peer_fingerprint in

stream_context_create()). (Jakub Zelenka)

- phpdbg:

. Fixed bug #76595 (phpdbg man page contains outdated information).

(Kevin Abel)

- SPL:

. Fixed bug #68825 (Exception in DirectoryIterator::getLinkTarget()). (cmb)

. Fixed bug #68175 (RegexIterator pregFlags are NULL instead of 0). (Tim

Siebels)

- Standard:

. Fixed bug #76778 (array_reduce leaks memory if callback throws exception).

(cmb)

- zlib:

. Fixed bug #65988 (Zlib version check fails when an include/zlib/ style dir

is passed to the --with-zlib configure option). (Jay Bonci)

. Fixed bug #76709 (Minimal required zlib library is 1.2.0.4). (petk)

PHP 7.2.10

- Core:

. Fixed bug #76754 (parent private constant in extends class memory leak).

(Laruence)

. Fixed bug #72443 (Generate enabled extension). (petk)

. Fixed bug #75797 (Memory leak when using class_alias() in non-debug mode).

(Massimiliano Braglia)

- Apache2:

. Fixed bug #76582 (Apache bucket brigade sometimes becomes invalid). (stas)

- Bz2:

. Fixed arginfo for bzcompress. (Tyson Andre)

- gettext:

. Fixed bug #76517 (incorrect restoring of LDFLAGS). (sji)

- iconv:

. Fixed bug #68180 (iconv_mime_decode can return extra characters in a

header). (cmb)

. Fixed bug #63839 (iconv_mime_decode_headers function is skipping headers).

(cmb)

. Fixed bug #60494 (iconv_mime_decode does ignore special characters). (cmb)

. Fixed bug #55146 (iconv_mime_decode_headers() skips some headers). (cmb)

- intl:

. Fixed bug #74484 (MessageFormatter::formatMessage memory corruption with

11+ named placeholders). (Anatol)

- libxml:

. Fixed bug #76777 ("public id" parameter of libxml_set_external_entity_loader

callback undefined). (Ville Hukkam&auml;ki)

- mbstring:

. Fixed bug #76704 (mb_detect_order return value varies based on argument

type). (cmb)

- Opcache:

. Fixed bug #76747 (Opcache treats path containing "test.pharma.tld" as a phar

file). (Laruence)

- OpenSSL:

. Fixed bug #76705 (unusable ssl => peer_fingerprint in

stream_context_create()). (Jakub Zelenka)

- phpdbg:

. Fixed bug #76595 (phpdbg man page contains outdated information).

(Kevin Abel)

- SPL:

. Fixed bug #68825 (Exception in DirectoryIterator::getLinkTarget()). (cmb)

. Fixed bug #68175 (RegexIterator pregFlags are NULL instead of 0). (Tim

Siebels)

- Standard:

. Fixed bug #76778 (array_reduce leaks memory if callback throws exception).

(cmb)

- zlib:

. Fixed bug #65988 (Zlib version check fails when an include/zlib/ style dir

is passed to the --with-zlib configure option). (Jay Bonci)

. Fixed bug #76709 (Minimal required zlib library is 1.2.0.4). (petk)

下载链接:
          Exhuming the GDPR Bodies      Cache   Translate Page      

GDPR regulation came into full force on May 25 th , 2018. That date represents the end of a two-year ‘running in’ period. It was, if you like, the end of the beginning.

Naturally, most of the attention has fallen ondatabases, as the area of highest risk for data at rest, but is this necessarily true? The corporate file servers and the file shares that reside on them can contain an immense amount of data in many non-database formats. In this article, I am going to describe at how you can uncover sensitive and personal data buried in network file shares and assess the level of risk this poses to your organisation, using tools such as Apache Tika, Bash and PowerShell.

What Is the Problem with File Shares?

Consider some of the facilities that an RDBMS gives us that a file share does not:

A focal point of where data resides

A structure optimized for that data

Enforcement of that structure

Relative clarity of what that structure represents

The means to capture documentation of that structure through tools such asRedgate SQL Doc

A simple means of interrogating the contents and structure of the database through the SQL language

The mindset that goes with administration and development of an RDBMS lends itself to data categorisation, and to structures that support such activity.

A file share gives us none of those things. It is akin to having a large garage or attic into which to stuff things that might be useful one day but will probably never be used. How many of us have said to a colleague “I know that I have it in a file somewhere, but I can’t quite find it right now?” This is why I believe that many businesses need someone with skills similar to those of a librarian, whose job it is to build data catalogs that include file shares as a data source.

Quantifying the Risk Posed by File Shares

To quantify the risk, we must find answers to the following questions:

What types of file do we have?

Where do they reside?

Are those files still in use or is there a legal obligation to keep them?

Who is currently responsible for those files?

Do they contain GDPR sensitive data?

Expect to find a substantial number of old files for which there is no known use and no current owner. There may even be no easy way to open those files, due to their obsolescence. GDPR represents a rare opportunity for digital decluttering.

In some cases, I can answer the risk questions posed above with an acceptable degree of accuracy. In others, I can only give a warning that further, more detailed, investigation needs to take place. For example, if I discover a zip file or a password protected file, then I can acknowledge its existence but further investigation, beyond my scope of operation, would be needed.

Preparation Before Investigation

If I must assess all corporate file shares for sensitive data, then the nature of the task means that both I, and my workstation, present a potential security risk. The approach taken to mitigate such a risk could be:

The creation of a specific virtual workstation for the task

Minimize software on the virtual workstation. No internet access, no email, etc.

The creation of a specific login for me to use to access the workstation and to carry out that work

Enhanced auditing of the workstation and my login

Strict timeboxing of the work.

The toolkit for my virtual workstation was primarily Bash and Powershell, to identify and locate the files, and Apache Tika to look inside those that may contain sensitive or personal data.

Bash and PowerShell?

I use both shells because each one has unique strengths:

PowerShell is useful when interacting with objects

Bash is useful when handling text streams

Limiting use to one shell risks a solution that succumbs to the tyranny of “or,” rather than the genius of “and.” However, I did find some difficulties with a PowerShell-only solution. For example, it does not have a suitable Sort command for the contents of a text file. Community modules exist, though I found them somewhat clunky. I also found that some PowerShell commands did not accept data piped through to them, requiring me to adjust the process to write an intermediate file. Given the size of the task, I was reluctant to do this. Therefore, I use windows subsystem for linux so I can use the shell that is appropriate to the task at hand.

On my workstation, the Linux subsystem lists drives mapped under Windows within the Linux /mnt folder. The C:\ drive will be /mnt/c . The Linux subsystem can use Windows file shares directly, without the need to map those shares to a Windows drive first, but I like being able to capitalize on the strengths of both Windows and Linux, so the mapped drive technique meets my needs, and the technique for doing this is described in the example below. On my PC, I have a share, called \\DadsPCJune2014\Chapter05 , which contains files from one of Ivor Horton’s “Beginning Java” books. Within the Linux subsystem, I create a folder that used to access that share.

sudo mkdir -p /mnt/java_tutorial/chapter05

The Linux mkdir command is very close to the Windows equivalent. The -p option ensures that all folders in the path are created if they do not exist already. The Windows share can now be attached to that directory as follows:

sudo mount -t drvfs '\\DadsPCJune2014\Chapter05' /media/java_tutorial/chapter05/ # Try a directory listing # -l = a line per file # -h = include human readable file size ls /media/java_tutorial/chapter05 -hl What File Types Do We Have?

I worked by mapping the network shares to an explicit drive letter on my secure virtual workstation. Using PowerShell, I can count the files by their extension, most frequent first:

# Powershell Get-Childitem c:\ -Recurse | where { -not $_.PSIsContainer } | group Extension -NoElement | sort count -desc > c:\users\gdpr_dave\count_by_extension.txt

I can perform a similar task using Bash, retrieving the extensions in alphabetic order:

find . -type f|awk -F "/" '{print tolower($NF)}'|grep -F .|sed -n 's/..*\.//p'|sort|uniq -c > $HOME/extension_count.txt

Bash is a little harder, so here’s a brief description of what the above command does:

Find all files below the current location

Use awk to split the file name and path by the / folder separator and output the last part (the file) in lower case

Filter out anything that does not contain a “.” as this indicates the presence of an extension

Replace anything before the “.”

Sort the results

Provide a unique count

The intention is to exclude any files without an extension, such as the multiple small files generated by git.

We can import the resulting files into Excel and cross-reference the entries to https://en.wikipedia.org/wiki/List_of_filename_extensions . By combining the two information sources, we can add a column to indicate whether a particular file type could contain GDPR sensitive data.

How Old Are My Files?

The Windows operating system presents us with three file dates, but these present some challenges when assessing the age of the file.

Date

Description

Created

The date that the file was created at its current location . If I copy a file to a target folder today, then the date and time I performed the copy will be the creation date

Modified

The date that the file was created, or last modified. If I copy the file to another location, then the modified date is retained giving the appearance that a file was modified before it was created.

Last Accessed

By default, this is the same as the Modified date and not the true date and time the file was last accessed. We can switch on the last access tracking, but it is next to useless if the Windows GUI is used as the act of examining the property results in the property being set to the current date/time.

In short, the file dates available through the Windows operating system are flawed, so we have to consider both the created and modified dates, separately.

For the purpose of evaluating file shares, I mapped a share to a drive letter and used two PowerShell modules to find the oldest and youngest file, in each folder within the share.

Get-OldestYoungestItemInfolder.ps1

The module shown below produces tab-delimited output which we can pipe into a file for import into SQL Server using bcp , BULK INSERT or appropriate ETL tool.

The code evaluates the file creation date. If we wanted to use the file modified date, then we would change CreationTime to LastWriteTime .

[cmdletbinding()] param([string]$Foldername='.',[string]$DesiredDate='CreationTime') $file_eariest_date = [DateTime]::MaxValue $file_latest_date = [DateTime]::MinValue $earliest_file_name="" $latest_file_name="" $number_of_files = Get-ChildItem -Path $Foldername -File -Force|Measure-Object|%{$_.Count} $items = Get-ChildItem -Path $Foldername -File -Force|sort $DesiredDate|Select -First 1 -Last 1 $file_eariest_date = $items[0].$DesiredDate $file_latest_date = $items[$items.Count -1].$DesiredDate $earliest_file_name = $items[0].Name $latest_file_name = $items[$items.Count -1].Name Write-Output "$FolderName't$($number_of_files)'t$('{0:yyyy-MM-dd}' -f $file_eariest_date)'t$('{0:yyyy-MM-dd}' -f $file_latest_date)'t$($earliest_file_name)'t$($latest_file_name)" Get-OldestYoungestItemRecursive.ps1

Our second module recurses down through the folder structure, calling our first module for each folder.

[cmdletbinding()] param([string]$Foldername='.',[string]$DesiredDate='CreationTime') Get-ChildItem -Path $Foldername -Directory -Recurse|Where-Object{$_.GetFiles().Count -gt 0}|ForEach-Object{Get-OldestYoungestItemInFolder.ps1 $($_.FullName)}

Let us suppose that we mapped a file share to the M:\ drive then we might pipe output to a file as follows:

Get-OldestYoungestItemRecursive.ps1 M:\ CreationTime >C:\Users\gdpr_auditor\Share_CreationTime.txt Get-OldestYoungestItemRecursive.ps1 M:\ LastWriteTime >C:\Users\gdpr_dave\Share_LastWriteTime.txt

Even these simple lists of folders and dates can reveal opportunities to remove obsolete information.

Where Are My Files?

We know what files we have, and the oldest and youngest file, in any folder. For file types that are likely to hold sensitive data, we must identify where they are held.

For any folder, we want to traverse down through all its subfolders listing those that contain a file with the desired extension. The Get-FileTypeLocation.ps1 PowerShell module below achieves this:

[cmdletbinding()] param([string]$Foldername='.',[string]$FileExtensionPattern='xls') Get-ChildItem $Foldername -Recurse -Include *.$FileExtensionPattern|Sort-Object DirectoryName|%{Write-Output $_.DirectoryName}|Get-Unique -AsString

Given the wide usage of Microsoft Excel across the enterprise, I chose .xls as the default file extension value for which to search. If I pass xl* then I gain a list of directories for XLS, XLM, XLSX, XLSM files, as well.

Using the ‘file share mapped to a drive’ example from earlier in this article, I can pipe the results of my PowerShell module to files:

Get-fileTypeLocation m:\ xl* >C:\Users\gdpr_dave\Share_ExcelLocation.txt Get-fileTypeLocation m:\ acc* >C:\Users\gdpr_dave\Share_AccessLocation.txt Get-fileTypeLocation m:\ mdb >C:\Users\gdpr_dave\Share_OldAccessLocation.txt Get-fileTypeLocation m:\ mde >C:\Users\gdpr_dave\Share_OldAccess2Location.txt

By importing each of these files into SQL Server as separate tables and joining on the folder name, we can get a reasonable approximation of when a folder was last active for file creation or modification, and what types of files are present in those folders.


Exhuming the GDPR Bodies
Which Files Pose a GDPR Risk?

I need to know if my files contain personal names, email addresses, and other sensitive information. Apache Tika provides the mechanism by which we can look inside files, without needing to have the original application used to create those files.

Apache Tika’s origins are as part of Apache Nutc h and Apache Lucene , which are a web crawler and index. For such programs to be most effective, they need to be able to look at the contents of the files and extract the relevant material. In other words, they need:

Visible content

Metadata such as the XIFF metadata in JPEG images.

Apache Tika supports over 1,400 file formats, which include the Microsoft Office suite and many more that are likely to contain the sort of data we need to find.

Running Apache Tika

Apache Tika is a Java application. Version 1.18 requires Java 8 or higher. It can behave as a command line application or as an interactive GUI.

I downloaded tika-app-1.18.jar into c:\code\ApacheTika\ on my workstation, and ran the following command, to use Tika as an interactive application.

java -jar c:\code\ApacheTika\tika-app-1.18.jar

This produces a dialogue as shown below.


Exhuming the GDPR Bodies

The file menu allows a file to be selected and the view menu provides different views of that file. Apache Tika can also be run as a command line application.

java -jar c:\code\ApacheTika\tika-app-1.18.jar -t m:\documents\Blessed_Anthem.docx

The -t switch asked for the output in plain text and resulted in the content shown below:

Mae hen wlad fy nhadau yn annwyl i mi Gwlad beirdd a chantorion enwogion o fri Ei gwrol ryfelwr, gwlad garwyr tra mad Tros ryddid collasant eu gwaed. Gwlad Gwlad, Pleidiol wyf i'm gwlad, Tra mr yn fur i'r bur hoff bau O bydded i'r hen iaith barhau

When running as a command line application, Apache Tika offers more facilities than are present in the interactive GUI. For example, the -l switch attempts to identify the language in the file.

java -jar c:\code\ApacheTika\tika-app-1.18.jar -l m:\documents\Blessed_Anthem.docx

This produces cy which is the ISO639-2 language code for Welsh.

Using Apache Tika to Extract Data from Spreadsheets

Let’s suppose that, prior to GDPR, the marketing manager at AdventureWorks had decided to run an email campaign involving all AdventureWorks customers. He or she had exported two tables from AdventureWorks2014 to an Excel spreadsheet, as shown below.


Exhuming the GDPR Bodies

The command below would extract the content of the sample spreadsheet to a file called output.txt .

java -jar tika-app-1.18.jar -t C:\Users\david.poole\Documents\SQLServerCentral\Red-Gate\GDPR\Tika\Email_Campaign2014.xlsx>output.txt

If we were to open that file in Notepad++, we would see something similar to the following:


Exhuming the GDPR Bodies

The spreadsheet tab names are against the left-hand margin

The spreadsheet rows are indented by one TAB

The spreadsheet columns are TAB-delimited

The spreadsheet rows are terminated by a line-feed, which is the Unix standard

Each sheet terminates with two empty rows.

Apache Tika has successfully extracted the contents from the spreadsheet, so our next task is to determine the steps necessary to identify the fact that it contains the names of people.

Acquiring a Dictionary of First Names

As the security and compliance manager for AdventureWorks, the quickest way to determine if a file contains customer names is to match against a dictionary of first names.

The easiest way to acquire that list of first names is by querying the company customer database. For this I create a view:

CREATE VIEW Person.ForeNameExtraction AS SELECT UPPER(REPLACE(FirstName, '.', ''))AS ForeName FROM AdventureWorks2014.Person.Person WHERE FirstName NOT LIKE '% %' AND LEN(REPLACE(FirstName, '.', '')) > 3 UNION SELECT UPPER(REPLACE(MiddleName, '.', '')) AS ForeName FROM AdventureWorks2014.Person.Person WHERE MiddleName NOT LIKE '% %' AND MiddleName IS NOT NULL AND LEN(REPLACE(MiddleName, '.', '')) > 3; GO

This produces a distinct list of first names that are at least three characters long and do not contain spaces. Without the size qualification, the risk would be that matching against such a list would produce a huge number of false positives.

I would use the query with the SQL Server bcp utility to produce a file of the results.

bcp "SELECT ForeName FROM Person.ForeNameExtraction ORDER BY ForeName" queryout "c:\code\FirstName.txt" -c -T -r0x0A -SMyDbServer

Note that -r0x0A gives an LF character as the row terminator to match the output used by Apache Tika.

Reformatting Apache Tika Output

To aid the matching process, the output from Apache Tika must be reformatted. Either Bash or PowerShell is adequate for the tasks required, as the output from each stage can be piped into the next.

As Windows 10 now supports Linux, and I work in a multi-platform environment, I prefer Bash.

Stage Bash

Change all white space to LF

sed ‘s/\s/\n/g’

Change output to upper case

tr ‘[a-z]’ ‘[A-Z]’

Remove empty lines

sed ‘/^$/d

Sort output in case-insensitive mode

sort -f

When put together, the command line would appear as shown below.

cat output.txt |sed 's/\s/\n/g'|tr '[a-z]' '[A-Z]'|sed '/^$/d'|sort -f >sorted_output.txt Matching Apache Tika Output to the Dictionary of First Names

Both the Apache Tika sorted_output.txt and our FirstName.txt file share the following characteristics.

Output is in upper case

Output is sorted in alphabetic sequence

Record terminators are an LF character

This allows me to use the bash join command, and then wc to count the number of matches.

join -1 1 -2 1 <(cat FirstName.txt) <(cat sorted_output.txt )|wc -l

However, this did produce an error as well as a count:

join: /dev/fd/63:405: is not sorted: J PHILLIP 23826

The error is due to differences between the character sets, and the way that the database sorts its records. This can be fixed by piping the output for both through the sort utility:

join -1 1 -2 1 <(cat FirstName.txt|sort -f) <(cat sorted_output.txt|sort -f )|wc -l

This produces a count of 23,850, which clearly indicates that a lot of first names have been found.

We can compare this to the of the original number of lines output by Apache Tika, which is 39,952.

cat output.txt |wc -l

I could also count the number of unique name matches against my dictionary:

join -1 1 -2 1 <(cat FirstName.txt|sort -f) <(cat sorted_output.txt|sort -f)|uniq|wc -l Matching Apache Tika Output to a RegEx Pattern

We can also use the egrep utility to search for text patterns that could be email addresses.

cat sorted_output.txt |egrep "^[A-Za-z0-9]+@[A-Za-z0-9-]+\."|wc -l

If we wanted to search for a pattern matching a UK postal code, then at the point where we converted all white space to new-lines characters, we would have to be explicit in converting just tab characters. This would be to avoid splitting the first and second half of the postal code.

Running the Tika Program

All the steps described so far can be assembled into a simple bash script, CheckFileWithTika.sh , with minor error checking.

#!/bin/bash export LC_ALL='C' # Forces all programs to output using the default language and use the same bytewise sort FileToCheck=$1 # It is easier to read code with named arguments SortedOutputFileName=sorted_output_$(uuidgen).txt # Make a filename unique to this run if [ $# -eq 0 ]; then echo -e "\033[31mERROR: NO ARGUMENT SUPPLIED: \033[93mExpected a fully qualified filename\033[0m" exit fi if [ ! -f $FileToCheck ]; then echo -e "\033[31mERROR File \033[93m$FileToCheck \033[31mdoes not exist\033[0m" exit fi java -jar tika-app-1.18.jar -t $FileToCheck|sed 's/\s/\n/g'|tr '[a-z]' '[A-Z]'|sed '/^$/d'|sort -f >$SortedOutputFileName fore_name_score=$(join -1 1 -2 1 <(cat FirstName.txt) <(cat $SortedOutputFileName )|wc -l) email_score=$(cat $SortedOutputFileName |egrep "^[A-Za-z0-9]+@[A-Za-z0-9-]+\."|wc -l) echo -e "\033[31mFile $FileToCheck matches $fore_name_score fore names and $email_score email addresses\
          Oracle DBA - EBS Databases. - Alquemy- Toronto,ON - Toronto, ON      Cache   Translate Page      
UNIX shell scripts,:. Experience supporting Oracle Applications versions R12, Oracle Database, SQL\*PLUS, PL/SQL, UNIX shell scripts, Apache, OC4J....
From Indeed - Wed, 12 Sep 2018 20:48:27 GMT - View all Toronto, ON jobs
          Dubbo 2.6.3 正式版发布,分布式 RPC 服务框架      Cache   Translate Page      

Dubbo 2.6.3 正式版发布了,带来了功能增强、新特性、bug 修复、性能优化和 Hessian-lite。

功能增强/新特性

  • Support implicit delivery of attachments from provider to consumer, #889

  • Support inject Spring bean to SPI by bean type, #1837

  • Add generic invoke and attachments support for http&hessian protocol, #1827

  • Get the real methodname to support consistenthash for generic invoke, #1872

  • Remove validation key from provider url on Consumer side, config depedently, #1386

  • Introducing the Bootstrap module as a unified entry for Dubbo startup and resource destruction, #1665

  • Open TCP_NODELAY on Netty 3, #1746

  • Graceful shutdown optimization (unified to the lifecycle to Servlet or Spring), #1820#2126

  • Enable thread pool on Consumer side, #2013

  • Support specify proxy type on provide side, #1873

  • Support dbindex in redis, #1831

  • Upgrade tomcat to 8.5.31, #1781

性能优化

  • ChannelState branch prediction optimization. #1643

  • Optimize AtomicPositiveInteger, less memory and compute cost, #348

  • Introduce embeded Threadlocal to replace the JDK implementation, #1745

    详情请查看 https://github.com/apache/incubator-dubbo/releases/tag/dubbo-2.6.3


              Network Administrator - thinktum Inc. - North York, ON      Cache   Translate Page      
    Experience working within a Windows, Linux, Tomcat and/or Apache environment and enterprise level networking hardware....
    From Indeed - Thu, 13 Sep 2018 15:46:31 GMT - View all North York, ON jobs
              Technical Writer II - Alexa - Amazon.com - Seattle, WA      Cache   Translate Page      
    Frameworks and Infrastructure with tools like Apache MXNet and TensorFlow, API-driven Services like Amazon Lex, Amazon Polly and Amazon Rekognition to quickly...
    From Amazon.com - Mon, 13 Aug 2018 19:22:01 GMT - View all Seattle, WA jobs
              Junior Software Engineer - Leidos - Morgantown, WV      Cache   Translate Page      
    Familiarity with NoSql databases (Apache Accumulo, MongoDB, etc.). Put your Java/C++ skills in action!...
    From Leidos - Wed, 18 Jul 2018 12:37:24 GMT - View all Morgantown, WV jobs
              Network Administrator - thinktum Inc. - North York, ON      Cache   Translate Page      
    Experience working within a Windows, Linux, Tomcat and/or Apache environment and enterprise level networking hardware....
    From Indeed - Thu, 13 Sep 2018 15:46:31 GMT - View all North York, ON jobs
              Fleetwood Mallard -- 2007 Mallard 26ft Open Floor Plan      Cache   Translate Page      
    Fleetwood Mallard -- 2007 Mallard 26ft Open Floor Plan

    Fleetwood Mallard -- 2007 Mallard 26ft Open Floor Plan

    Price : $ 9,500
    Category : Other Makes
    Condition : Used


    Location: 85120, Apache Junction,AZ,USA

    Visit listing »


              Как установить локальный серер MAMP на Mac OS X?      Cache   Translate Page      
    Вам нужен локальный сервер на mac os x? Здесь балом правит mamp (расшифровывается как mac, apache, mysql, php). Есть и платная pro версия. О разнице можете узнать, перейдя по этой ссылке (сабжа больше нет, потому скопировал в google docs) на страницу «mamp vs...
              New 1/72 Helicopters from Altaya      Cache   Translate Page      
    AH-64A Apache, U.S. Army (1:72) (http://www.diecastairplane.com/store/p/304268-AH-64A-Apache,-U.S.-Army-1-72.html?a=diecastemail) Agusta SH-3D...
              quipo/kafka-php      Cache   Translate Page      
    PHP client library for Apache Kafka 0.7
              Autentificacion Udpxy a traves de apache      Cache   Translate Page      

    Autentificacion Udpxy a traves de apache

    Hola soy novato en linux
    Y muy novato en Apache.
    He visto en una web una persona que ha conseguido a traves de apache poner un usuario y una contraseña para poder utilizar udpxy
    Y lo ha hecho añadiendo al Vhost lo siguiente

    <Location /udp> ProxyPass http://192.168.2.21:4022/udp ProxyPassReverse http://192.168.2.21:4022/udp AuthType Basic AuthName &qu...

    Publicado el 12 de Septiembre del 2018 por gedas07

              Technical Writer II - Alexa - Amazon.com - Seattle, WA      Cache   Translate Page      
    Frameworks and Infrastructure with tools like Apache MXNet and TensorFlow, API-driven Services like Amazon Lex, Amazon Polly and Amazon Rekognition to quickly...
    From Amazon.com - Mon, 13 Aug 2018 19:22:01 GMT - View all Seattle, WA jobs
              htaccess Expert in Apache & PHP      Cache   Translate Page      
    htaccess Expert in Apache & PHP to Optimize website (Budget: $10 - $30 USD, Jobs: Apache, Linux, PHP)
              原 荐 在php的yii2框架中整合hbase库      Cache   Translate Page      
    penngo的空间 php

    正文


    原 荐 在php的yii2框架中整合hbase库

    在php的yii2框架中整合hbase库


    原 荐 在php的yii2框架中整合hbase库
    penngo 发布于 今天 16:59

    字数 592

    阅读 3

    收藏 0

    PHP Yii Apache HBase Thrift

    开源中国十周年庆:开源众包怎么做我说了算!参与赢终身免费大奖 >>>
    原 荐 在php的yii2框架中整合hbase库

    Hbase通过thrift这个跨语言的RPC框架提供多语言的调用。

    Hbase有两套thrift接口(thrift1和thrift2),但是它们并不兼容。根据官方文档,thrift1很可能被抛弃,本文以thrift2整合为例。

    1、访问官网http://thrift.apache.org/download,下载

    thrift-0.11.0.exe (生成接口rpc工具,thrift-0.11.0.exe改名thrift.exe,保存在D:\project\thrift\thrift.exe)
    thrift-0.11.0.tar.gz(thrift相关库,保存在D:\project\thrift\thrift-0.11.0)

    2、访问hbase官网(http://archive.apache.org/dist/hbase/),下载hbase-1.2.6-src.tar.gz

    解压保存在D:\project\thrift\hbase-1.2.6

    3、生成php接口代码

    解压hbase-1.2.6-src.tar.gz,hbase-1.2.6\hbase-thrift\src\main\resources\org\apache\hadoop\hbase文件夹同时存在thrift和thrift2接口描述文件,本文只使用thrift2

    在D:\project\thrift目录中输入cmd命令,生成对应php的sdk文件。

    thrift -gen php hbase-1.2.6\hbase-thrift\src\main\resources\org\apache\hadoop\hbase\thrift2\hbase.thrift

    生成的D:\project\thrift\gen-php目录包含文件:

    THBaseService.php
    Types.php

    4、要通过thrifc调用hbase,需要先启动hbase的接口服务

    $HBASE_HOME/bin/hbase-daemon.sh start thrift2 //启动
    $HBASE_HOME/bin/hbase-daemon.sh stop thrift2 //停止

    5、与yii2整合

    在vendor文件夹中新建hbase目录

    vendor\hbase\gen-php //复制D:\project\thrift\gen-php
    vendor\hbase\php //复制D:\project\thrift\thrift-0.11.0\lib\php

    由于thrift2的php不使用Composer,类库命名方式也不完全符合PSR-4标准, 所以本文使用include_path方式来定位并导入类文件。

    common\models\HArticle.php

    <?php
    namespace common\models;
    require_once dirname(dirname(__DIR__)).'/vendor/hbase/php/lib/Thrift/ClassLoader/ThriftClassLoader.php';
    use Thrift\ClassLoader\ThriftClassLoader;
    $loader = new ThriftClassLoader();
    $loader->registerNamespace('Thrift', dirname(dirname(__DIR__)) . '/vendor/hbase/php/lib');
    $loader->register();
    require_once dirname(dirname(__DIR__)) . '/vendor/hbase/gen-php/Types.php';
    require_once dirname(dirname(__DIR__)) . '/vendor/hbase/gen-php/THBaseService.php';
    use Thrift\Protocol\TBinaryProtocol;
    use Thrift\Transport\TSocket;
    use Thrift\Transport\TBufferedTransport;
    use THBaseServiceClient;
    use TGet;
    class HArticle
    {
    private $host = '192.168.1.108';
    private $port = 9090;
    public function get($rowKey){
    $socket = new TSocket($this->host, $this->port);
    $transport = new TBufferedTransport($socket);
    $protocol = new TBinaryProtocol($transport);
    $client = new THBaseServiceClient($protocol);
    $transport->open();
    $tableName = "article_2018";
    $get = new TGet();
    $get->row = $rowKey;
    $arr = $client->get($tableName, $get);
    $data = array();
    $results = $arr->columnValues;
    foreach($results as $result)
    {
    $qualifier = (string)$result->qualifier;
    $value = $result->value;
    $data[$qualifier] = $value;
    }
    $transport->close();
    return $data;
    }
    }

    frontend\controllers\TestController.php

    <?php
    namespace frontend\controllers;
    use yii\web\Controller;
    use common\models\HArticle;
    class TestController extends Controller
    {
    public function actionIndex()
    {
    $hArticle = new HArticle();
    $data = $hbaseNews->get('20180908_1f1be3cd26a36e351175015f450fa3f6');
    var_dump($data);
    exit();
    }
    }

    著作权归作者所有

    共有人打赏支持


    原 荐 在php的yii2框架中整合hbase库
    penngo

    粉丝 76

    博文 99

    码字总数 55704

    作品 2

    广州

    程序员

    相关文章 最新文章

    R语言与Hadoop和Hbase的联合使用

    HBase和rhbase的安装与使用,分为3个章节。 1. 环境准备及HBase安装2. rhbase安装3. rhbase程序用例 每一章节,都会分为”文字说明部分”和”代码部分”,保持文字说明与代码的连贯性。 注:...

    openthings

    2015/07/02

    0

    0

    使用hive读取hbase数据

    Hive是基于Hadoop的一个数据仓库工具,可以将结构化的数据文件映射为一张数据库表,并提供完整的sql查询功能,可以将sql语句转换为 MapReduce任务进行运行。 其优点是学习成本低,可以通过类...

    凡16

    2013/12/13

    0

    0

    hive 与 hbase 结合

    一、hive与hbase的结合 Hive会经常和Hbase结合使用,把Hbase作为Hive的存储路径,所以Hive整合Hbase尤其重要。使用Hive读取Hbase中的数据,可以使用HQL语句在HBase表上进行查询、插入操作;甚...

    meteor_hy

    06/26

    0

    0

    手把手教你通过Thrift 访问ApsaraDB for HBase

    Thrift 多语言接入 Thrift 提供多语言访问HBase的能力,支持的语言包从Thrift官网看括: C++, Java, python, PHP, Ruby, Erlang, Perl, Haskell, C#, Cocoa, javascript, Node.js, Smalltalk,......

    玄陵

    08/03

    0

    0


    原 荐 在php的yii2框架中整合hbase库
    hive与hbase数据交互的详解指南 | ApacheCN(apache中文网)

    HBase和Hive的集成原理 ApacheCN | apache中文网 Hive和Hbase有各自不同的特征:hive是高延迟、结构化和面向分析的,hbase是低延迟、非结构化和面向编程的。Hive数据仓库在hadoop上是高延迟...

    片刻

    2014/06/28

    0

    0

    没有更多内容

    加载失败,请刷新页面

    加载更多 红黑树

    在看jdk的HashMap的代码的时候,看到了jdk8的实现方式用到了红黑树,然后,就看了一下。 废话少讲,开始红黑树的简介。 红黑树的特性 1.每个节点或者是黑色,或者是红色。 2.根节点是黑色。 ...

    无知的小狼

    15分钟前

    1

    0

    jvisualvm远程监控 visualgc插件 不受此jvm支持问题

    问题描述:VisualVM连接远程服务器有两种方式:JMX和jstatd,两种方式都不能完美支持所有功能,例如JMX不支持VisualGC,jstatd不支持CPU监控,实际使用可同时配置上并按需选用。 下面介绍如何...


              OpenSense Labs: Storing the Data: Drupal as a Central Content Repository      Cache   Translate Page      
    Storing the Data: Drupal as a Central Content Repository Akshita Thu, 09/13/2018 - 19:52

    The journey from a visitor to the client doesn’t happen overnight nor over a single screen. 

    It is unfair on the part of organizations to assume that all readers will be using the same screen to consume their content. 

    In case organizations are working towards targeting various visitors it is important to have a durable and centralised content dissemination platform to serve digital content through various screens.

    Therefore it is equally important that various mediums ensure a smoother journey and the backend - content repository - provides a seamless translation of information to various touchpoints. 

    What is a Content Repository?

    “A content repository is a database of (digital) content with an associated set of data management, search and access methods allowing various application-independent access to the content with the ability to store and modify content.” 

    The content repository acts as the storage engine for a larger application such as a CMS which adds a user interface on top of each of the repository's application user interface.

    a diagram with a heptagon in middle with 7 boxes connected to it

    The proliferation of content across a variety of sources can create an enormous business challenge. As the unstructured content grows, organizations need to look for a flexible approach that supports interoperability with a wide array of popular systems and products. 

    A robust central content management repository should store a variety of content formats, facilitate read/write capabilities, control access. 

    Here are some of the features of a content repository:

    • Efficient storage to integrate content
    • Query and Search 
    • Versioning 
    • Import/export the content 
    • Editor accessibility for all the documents and the content. 
    • Records retention management systems
    • Document capture systems and document-imaging (Complementary systems) 

    Difference between a Content Repository and CMS
    A content management system manages the creation and modification of digital content and typically supports multiple users in a collaborative environment.
    While a content repository disseminates the content to be shared between disparate websites, different kinds of devices or channels such as mobile phones, tablets, kiosks, Facebook or syndicated via an API.

    How Does a Content Repository Work?

    A central content repository allows the content editors to edit content directly from the backend of one site. The content editors simply choose the topics that the content belongs to, and the sites subscribe to those topics and it is then available to all the connected sites automatically. 

    a flow chart with five blocks in it

    A Content Repository Workflow works like this:

    Content creation of a topic happens on Site A.

    • The content is shared via a central content repository.
    • Site B is subscribed (sync rules) to receive updates whenever the content for the same topic is created.
    • Site B, C, D receive the notification and pull in the content. 
    • If any user on site C searches for the new content published through site A, she will get it through the content repository.

    Drupal 8 is well suited to act as a central content repository, as it has built-in support for REST web services, a serialisation component, and can be configured to work with publishing workflows and notifications.

    Search web service such as Apache Solr or ElasticSearch can best provide a lookup service for each site. Rather than subscribing to a particular topic, content editors can simply search for the content they wish to import from.

    Application of Drupal as a Central Content Repository

    • Content management
    • Document management
    • Digital asset management
    • Records management
    • Revision control
    • Social collaboration
    • Web content management

    Building Consumer Experience with a Central Content Repository

    Content is not only the material you use to develop your CXM strategies—it’s also the interactions between customers and prospective customers have with you. Talking about the online customer experience, a CMS is part of the process of designing and supporting CX strategies. 

    Simply because it stores all the content you need to manage the experience. However, customer experience management is about more than the online channels. 

    In order to successfully manage the customer experience, the CMS needs to be able to quickly access and react to the elements of a customer interaction. Not just this, the elements should be accessible to the editors as well. 

    Managing every single version of the web pages is a heck of a job and ensuring that the content looks just the same is another fight. 

    Most, if not all, CMSs are designed to store content not just as HTML pages, but as individual components that can be easily reused across different web pages. Across various devices, mobile sites and apps, and social networks.

    In this way, the content repositories can be leveraged to provide content as well. 

    Content integration is the key to a well-managed content repository. Managing the content by integrating it with all the other systems. 

    A central content repository also allows you to develop the support applications that have access to customer information easily, including information from CRM systems, traffic information, and the like.

    Having it all accessible in a centralized content repository will help you identify, design, and refine your CX strategies quickly.

    Building a Central Content Repository for FarmJournal Media 

    For Farm Journal Media, OpenSense Labs have implemented a similar centralised content management system. 

    Technologies Used 

    • Express.js 
    • MongoDB 
    • Drupal 8 

    How Did It Work?

    Express.js- node.js framework provided a library of many pre-built functions which were leveraged for the CCMS. 

    It allowed simultaneous access to multiple authors without compromising on speed. This could be done by leveraging its events loop based asynchronous task handling. 

    The interface to serve content was developed via MongoDB. The system triggered updates of content from CCMS to MongoDB asynchronously and in real time. This ensured the cron jobs do not overload the sites as the webhook request will be triggered only when required. 

    Due to this layered architecture, the overall content journey once the editor hits save, to consumer site was at max 3 seconds.  

    An increase in consumer sites, update count and pull requests do not affect the load on CCMS Drupal. 

    A special fail handler was built to sanity check between CCMS, Mongo and consumer sites. This ensured there was no duplicity and maintain an error log for missing articles during the journey it takes with an exact failure points reported. 

    homepage of greenbook with various blocks
    One of the sites of FarmJournal

    How Did the CCMS Worked?

    It allowed the team of editors to:

    • Centrally manage the content through one platform
    • Cross-publish articles on full networks of FarmJournal sites
    • Use a simple site vs category mapping for automated syndication of articles. 
    • Have a centralised reporting to boost the editorial teams’ productivity & article publication pace. 

    The Scope of Building a Content Repository in Drupal

    Coupled CMS (with supporting API)

    A traditional website on Drupal allows content editors to add or edit content with a preview for the content as well. This is because a traditional CMS is tied (or coupled) to a front end (which is the case with Drupal).

    Taking the front end out of the equation can bear its own challenges.  

    The front end is what a user sees when viewing an application, which, in Drupal’s primary case, is a website. 

    Content editors can view the content before it’s published using a wide array of tools such as inline editing or pre-published previews. 

    Available modules in Drupal allow for quick and relatively easy modification to how the data is displayed on the frontend. Developers aren’t always needed to make simple changes, which can be more efficient for both time and cost, possibly a huge benefit to using a coupled CMS.

    Drupal 8 has a strong emphasis on providing many API services out of the box, and there is a strong push for the API-first approach.

    Headless CMS (the API-only approach)

    With API-first Initiative at the forefront, Drupal 8.0 was shipped with a built-in REST API which spelt the beginning of Drupal’s transformation as an API-first platform.

    A headless CMS often confused with a decoupled CMS is considered an API-only approach. 

    It provides a hub for the content sans any frontend. 

    The backend allows content editors to publish content distributing it automatically to any integrated application. Since there is no coupled frontend interface to immediately view the data applications such as Digital signage need be developed and integrated in order to access this content. 

    In such a scenario trialing and proofing content before publishing can be difficult. Another challenge is the layout which can be a limitation to the marketing teams. 

    The Drupal community has already taken steps towards making sure Drupal continues to be a relevant contender as either a coupled OR headless CMS.

    Drupal distribution Open Y can be used to build such applications for a Digital Signage.

    Drupal Distribution Contenta can be used as an API to connect the backend of Drupal with any application. 

    Conclusion

    Previously unstructured and inaccessible content comes alive in digital business applications that engage customers, automate business processes, enhance collaboration and govern and protect content throughout its lifecycle. 

    Content management services and solutions from OpenSense Labs support your digital transformation and help you build a cognitive business that is confident, efficient and competitive. Drop a mail at hello@opensenselabs.com.  

    blog banner
    Files stacked in two columns
    blog image
    colorful bags stacked in three rows
    Blog Type
    Is it a good read ?
    On

              My New Best Friend...      Cache   Translate Page      

    My New Best Friend is a wonder and an awesome fella. Sheriff Garrett. I also have really taken to his Girlfriend that is a jewel. Both are exceptional people. She's Apache. Holds to old things and ways.

    He is the very most Alive man that I have ever met. Outdoors man.Raised on a big ranch. Handy at all ranch things, and knows cowboy ways. Honest. Truthful. NO EGO. Humble man. No pretense. Just straight talk, and has had adventures he tells. Most amazing. He is a full entertainment all by hisself.

    He tells of his ranch adventures and his hunting and backpacking ones too. A magical sort of person. Fully a man's man.

    They love my place. Welcome anytime. They spent some time here, and it has been a super pleasure.

    Very conscious. Intelligent. Super smart in wisdom that is simple, but true. He treats his girlfriend with gentle kindness and attention, and is an old fashioned gentleman in every way. Manners? Oh yes.

    Like I said, in all my life, I have Never met a man so very alive.

    He's active. Well equipted to do feats that most won't think to do.

    I am glad we are friends. She is quiet. He is a spellbinding storyteller. They are a perfect match.

    I don't often meet a whole lot of new folks...being a caregiver,and people do Not like to be around an invalid with mental problems. Makes them uncomfortable.

    This is a good thing. He visited with the companion and even got him talk and smile. I sure appreciated that.

    Nancy


              Modernisering Nederlandse Apache vloot nabij      Cache   Translate Page      

    Staatssecretaris Barbara Visser zal vrijdag 14 september, samen met Michèle Hizon, uitvoerend directeur voor militaire samenwerking, een contract tekenen om de Nederlandse Apache vloot te moderniseren, zo meldt Defensie. Startschot De ondertekening is het officiële startschot voor de modernisering van de 28 Apache gevechtshelikopters van de Koninklijke Luchtmacht. De ondertekening vindt plaats op vliegbasis Gilze-Rijen …

    Het artikel Modernisering Nederlandse Apache vloot nabij is gepubliceerd op Up in the Sky.


              How to Make Artificial Intelligence Explainable      Cache   Translate Page      

    How to Make Artificial Intelligence Explainable

    How to Make Artificial Intelligence Explainable - A New Analytic Workbench

    FICO today announced the latest version of FICO® Analytics Workbench™, a cloud-based advanced analytics development environment that empowers business users and data scientists with sophisticated, yet easy-to-use, data exploration, visual data wrangling, decision strategy design and machine learning.

    As new data privacy regulations shine a spotlight on AI and machine learning, the FICO Analytics Workbench xAI Toolkit helps data scientists better understand the machine learning models behind AI-derived decisions.

    “As businesses depend on machine learning models more and more, explanation is critical, particularly in the way that AI-derived decisions impact consumers,” said Jari Koister, vice president of product management at FICO. “Leveraging our more than 60 years of experience in analytics and more than 100 patents filed in machine learning, we are excited at opening up the machine learning black box and making AI explainable. With Analytics Workbench, our customers can gain the insights and transparency needed to support their AI-based decisions.”

    How to Make Artificial Intelligence Explainable - Avoiding the Common Pitfalls

    “Computers are increasingly a more important part of our lives, and automation is just going to improve over time, so it’s increasingly important to know why these complicated AI and ML systems are making the decisions that they are,” said AI expert and assistant professor of computer science at the University of California Irvine, Sameer Singh. “The more accurate the algorithm, the harder it is to interpret, especially with deep learning. Explanations are important, they can help non-experts to understand the reasons behind the AI decisions, and help avoid common pitfalls of machine learning.”

    Built for both business users and data scientists, the FICO® Analytics Workbench™ combines the best elements of FICO’s existing data science tools with several open source technologies, into a single, cloud-ready, machine learning and decision science toolkit, powered by scalable Apache Spark technologies. The Analytics Workbench provides seamless and automated regulatory audit compliance support, producing the necessary documentation for internal review and external regulators.

    The Analytics Workbench has been designed for users with a variety of skill sets, from credit risk officers looking for a consistent tool to data scientists and business analysts collaborating and working together to inform and enrich strategic decision making. With an intuitive interface, users can expect faster time-to-value, higher levels of productivity, and significant business improvements through analytically powered decisions.

    How to Make Artificial Intelligence Explainable - See Our Demo

    For more information, click here.

    For a demonstration of analytics workbench, click here.

    The post How to Make Artificial Intelligence Explainable appeared first on FICO.


              Apache Co. (NYSE:APA) VP Sells $152,145.60 in Stock      Cache   Translate Page      
    Apache Co. (NYSE:APA) VP Dominic Ricotta sold 3,480 shares of the firm’s stock in a transaction on Monday, August 27th. The shares were sold at an average price of $43.72, for a total transaction of $152,145.60. Following the sale, the vice president now owns 10,357 shares in the company, valued at approximately $452,808.04. The transaction […]
              JetBrains DataGrip 2018.2.3      Cache   Translate Page      

    JetBrains DataGrip 2018.2.3
    JetBrains DataGrip 2018.2.3 | 155 MB

    DataGrip is the multi-engine database environment. We support MySQL, PostgreSQL, Microsoft SQL Server, Oracle, Sybase, DB2, SQLite, HyperSQL, Apache Derby and H2. If the DBMS has a JDBC driver you can connect to it via DataGrip. For any of supported engines it provides database introspection and various instruments foaedit, and clone data rows. Navigate through the data by foreign keys and use the text search to find anything in the data displayed in the table editor.


              AWS and Security Implementation expert      Cache   Translate Page      
    Our tech stack is Mongo, Nodejs, ExpressJS and AngularJS and socket.io Knowledge on Linux , Apache/Nginx Need extensive experience AWS tools like Autoscaling, route53, ec2 , lambda, s3, beanstalk , Mongo... (Budget: $30 - $250 USD, Jobs: Amazon Web Services, Linux, Nginx, node.js, System Admin)
              Listos para la llegada de lluvias, en Álamo Temapache; calles y drenajes en trabajos de desazolve      Cache   Translate Page      
    El presidente Jorge Vera Hernández firmó ...

              Senior Application Development Specialist      Cache   Translate Page      
    CT-Bloomfield, Bloomfield, Connecticut Skills : • Proficiency in Java programming and 6 to 10 years’ experience developing service oriented architectures and designs especially on Red Hat JBoss Fuse platform. • Experience working on Open source technology stacks like Apache Camel, Fuse, CXF, Spring, Blueprint, Groovy and MyBatis. • Experience working on messaging platforms MQ, AMQ and Kafka • Proficient in Oracl
              Emmelie Scholtens: ‘Ik baal van dat begin!’      Cache   Translate Page      
    Apache is nou eenmaal Apache. Daarom had Emmelie Scholtens met de ringmeester geregeld dat zij voor haar Grand Prix pas de ring in zou hoeven als de vorige deelnemer er al uit was. “Maar op het laatst liep het toch...
              Chown apache shows error "invalid user"      Cache   Translate Page      
    I'm working with an Ubuntu EC2 instance that was setup by someone else, so I'm not clear on what has been done in the past. WordPress is running in document root /var/www/html/foldername/
    ...
              Looking For A Dependable House Cleaner For Family Living In Apache. - Care.com - Apache, OK      Cache   Translate Page      
    Bathroom Cleaning, Kitchen Cleaning, General Room Cleaning, Laundry, Organization. I have dogs- small dogs....
    From Care.com - Mon, 10 Sep 2018 13:52:42 GMT - View all Apache, OK jobs
              Apartment Manager - SOUTHRIDGE APARTMENTS - Apache, OK      Cache   Translate Page      
    Must have an understanding of how a budget works and how to stay within the budget. Manager performs monthly inspections in ALL units....
    From Indeed - Wed, 12 Sep 2018 01:51:41 GMT - View all Apache, OK jobs
              Local Coordinator - Great Wall China Adoption - Apache, OK      Cache   Translate Page      
    This is a part-time, contract position, meaning you will work on your own schedule and are compensated for the number of students you place....
    From Indeed - Tue, 28 Aug 2018 12:54:01 GMT - View all Apache, OK jobs
              LEAD SALES ASSOCIATE-FT in APACHE, OK - Dollar General - Apache, OK      Cache   Translate Page      
    Occasional or regular driving/providing own transportation to make bank deposits, attend management meetings and travel to other Dollar General stores....
    From Dollar General - Mon, 27 Aug 2018 10:49:40 GMT - View all Apache, OK jobs
              Volunteer Coach-All Sports JobID: 8333 - Peoria Unified School District - Apache, OK      Cache   Translate Page      
    Must be able to obtain and maintain an Arizona driver’s license as driving students to and from competitions may be required....
    From Peoria Unified School District - Thu, 16 Aug 2018 18:38:35 GMT - View all Apache, OK jobs
              Boys Football Coach JobID: 8324 - Peoria Unified School District - Apache, OK      Cache   Translate Page      
    Must be able to obtain and maintain an Arizona driver’s license as driving students to and from competitions may be required....
    From Peoria Unified School District - Tue, 14 Aug 2018 18:39:54 GMT - View all Apache, OK jobs
              Assistant Athletic Coach-All Sports, All Levels JobID: 8323 - Peoria Unified School District - Apache, OK      Cache   Translate Page      
    Must be able to obtain and maintain an Arizona driver’s license as driving students to and from competitions may be required....
    From Peoria Unified School District - Tue, 14 Aug 2018 18:39:51 GMT - View all Apache, OK jobs
              Special Education Assistant JobID: 8235 - Peoria Unified School District - Apache, OK      Cache   Translate Page      
    Must be able to obtain and maintain an Arizona driver’s license. Knowledge of applicable Federal, state, county and city statutes, rules, policies and...
    From Peoria Unified School District - Tue, 07 Aug 2018 18:38:45 GMT - View all Apache, OK jobs
              Full Time Crossing Guard JobID: 8234 - Peoria Unified School District - Apache, OK      Cache   Translate Page      
    May perform a variety of clerical or receptionist office duties. Knowledge of applicable Federal, state, county and city statutes, rules, regulations,...
    From Peoria Unified School District - Tue, 07 Aug 2018 18:38:42 GMT - View all Apache, OK jobs
              Streaming Movie Jurassic World: Fallen Kingdom (2018)      Cache   Translate Page      

    Jurassic World: Fallen Kingdom (2018) HD Director : J. A. Bayona. Writer : Derek Connolly, Colin Trevorrow. Producer : Patrick Crowley, Belén Atienza, Thomas Hayslip, Frank Marshall. Release : June 6, 2018 Country : United States of America. Production Company : Amblin Entertainment, Apaches Entertainment, Legendary Entertainment, Universal Pictures, Perfect World Pictures, The Kennedy/Marshall Company. […]

    The post Streaming Movie Jurassic World: Fallen Kingdom (2018) appeared first on Gainesville Video Production | 352-507-7033.


              京东金融数据分析案例(一)      Cache   Translate Page      

    版权声明:本文为博主原创文章,未经博主允许不得转载。 https://blog.csdn.net/mingyunxiaohai/article/details/82428664

    数据说明:

    给定的数据为业务情景数据,所有数据均已进了采样和脱敏处理,字段取值与分布均与真实业务数据不同。提供了时间为 2016-08-03 到 2016-11-30 期间,用户在移动端的行为数据、购物记录和历史借贷信息,及 11 月的总借款金额。 数据集下载地址为:链接: https://pan.baidu.com/s/1hk8hARHxkQcMS8SgABmcHQ 密码: fc7z

    文件包括user.csv,order.cav,click.csv,loan.csv,loan_sum.csv。

    前言:一般的大数据项目一般都分为两种,一种是批处理一种是流式处理,此次练习批处理使用hive和presto处理,流式处理使用SparkStreaming+kafka来处理

    任务 1

    一般情况下我们的user的数据都是存在自己的关系型数据库中,所以这里将 t_user 用户信息到 mysql 中,我们在从MySQL中将其导入到hdfs上,其他三个文件及,t_click,t_loan 和 t_loan_sum 直接导入到 HDFS 中。

    mysql自带csv导入功能所以

    先创建数据库和user表

    create database jd use jd create table t_user (uid INT NOT NULL, age INT, sex INT, active_date varchar(40), initial varchar(40));

    导入数据

    LOAD DATA LOCAL INFILE '/home/chs/Documents/t_user.csv' INTO TABLE t_user CHARACTER SET utf8 FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n' IGNORE 1 ROWS; 任务 2

    利用 Sqoop 将 MySQL 中的 t_user 表导入到 HDFS 中

    显示有哪些数据库

    sqoop list-databases --connect jdbc:mysql://master:3306 --username root --password '' //显示下面的几个数据库 information_schema jd mysql performance_schema

    显示有哪些表

    sqoop list-tables --connect jdbc:mysql://master:3306/jd --username root --password '' //这里只有一张表 t_user

    使用sqoop把MySQL中表t_user数据导入到hdfs的/data/sq目录下

    sqoop import --connect jdbc:mysql://master:3306/jd --username root --password '' --table t_user --target-dir /data/sq 出错了 18/08/21 13:44:26 ERROR tool.ImportTool: Import failed: No primary key could be found for table t_user. Please specify one with --split-by or perform a sequential import with '-m 1'.

    说是这个表中没有主键。我们可以建表的时候给它设置上主键,也可以使用下面 split-by来指定使用哪个字段分割

    sqoop import --connect jdbc:mysql://master:3306/jd --username root --password '' --table t_user --target-dir /data/sq --split-by 'uid' 又出错了 Host 'slave' is not allowed to connect to this MySQL server Host 'slave2' is not allowed to connect to this MySQL server

    错误原因 ,因为我这里的hadoop集群使用了3台虚拟机,slave和slave2没有使用root用户访问MySQL的权限

    进入mysql的控制台

    use mysql

    select host,user,password from user;

    +-----------+------+----------+ | host | user | password | +-----------+------+----------+ | localhost | root | | | master | root | | | 127.0.0.1 | root | | | ::1 | root | | | localhost | | | | master | | | +-----------+------+----------+

    可以看到现在只有master有权限,给slave和slave2也设置权限

    grant all PRIVILEGES on jd.* to root@'slave' identified by ''; grant all PRIVILEGES on jd.* to root@'slave2' identified by '';

    这才执行OK

    查看导入后的hdfs上的目录

    hdfs dfs -ls /data/sq

    -rw-r--r-- 1 chs supergroup 0 2018-08-21 14:06 /data/sq/_SUCCESS -rw-r--r-- 1 chs supergroup 807822 2018-08-21 14:06 /data/sq/part-m-00000 -rw-r--r-- 1 chs supergroup 818928 2018-08-21 14:06 /data/sq/part-m-00001 -rw-r--r-- 1 chs supergroup 818928 2018-08-21 14:06 /data/sq/part-m-00002 -rw-r--r-- 1 chs supergroup 818964 2018-08-21 14:06 /data/sq/part-m-00003

    查看每一部分的数据

    hdfs dfs -cat /data/sq

    17107,30,1,2016-02-13,5.9746772897 11272,25,1,2016-02-17,5.9746772897 14712,25,1,2016-01-10,6.1534138563 16152,30,1,2016-02-10,5.9746772897 10005,30,1,2015-12-17,5.7227683627 ......

    OK导入完成 剩下的几个CSV文件直接功过hadoop的put命令上传到hdfs上对应的目录即可。

    任务 3

    利用 Presto 分析产生以下结果,并通过 web 方式可视化:

    各年龄段消费者每日购买商品总价值

    男女消费者每日借贷金额

    我们在使用presto做数据分析的时候,一般是跟hive联合起来使用,先从hive中创建相应的数据表,然后使用presto分析hive中的表。

    启动hive

    //启动hive的metastore nohup hive --service metastore >> /home/chs/apache-hive-2.1.1-bin/metastore.log 2>&1 & //启动hive server nohup hive --service hiveserver2 >> /home/chs/apache-hive-2.1.1-bin/hiveserver.log 2>&1 & //启动客户端 beeline 并连接 beeline beeline> !connect jdbc:hive2://master:10000/default hadoop hadoop

    创建用户表

    create table if not exists t_user ( uid STRING, age INT, sex INT, active_date STRING, limit STRING )ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE;

    导入hdfs上的数据

    load data inpath '/data/sq' overwrite into table t_user;

    创建用户订单表

    create table if not exists t_order ( uid STRING, buy_time STRING, price DOUBLE, qty INT, cate_id INT, discount DOUBLE )ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE;

    导入hdfs上的数据

    load data inpath '/data/t_order.csv' overwrite into table t_order;

    创建用户点击表

    create table if not exists t_click ( uid STRING, click_time STRING, pid INT, param INT )ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE;

    导入hdfs上的数据

    load data inpath '/data/t_click.csv' overwrite into table t_click;

    创建借款信息表t_loan

    create table if not exists t_loan ( uid STRING, loan_time STRING, loan_amount STRING, plannum INT )ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE;

    导入hdfs上的数据

    load data inpath '/data/t_loan.csv' overwrite into table t_loan;

    创建月借款总额表t_loan_sum

    create table if not exists t_loan_sum ( uid STRING, month STRING, loan_sum STRING )ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE;

    导入hdfs上的数据

    load data inpath '/data/t_loan_sum.csv' overwrite into table t_loan_sum;

    启动Presto

    在安装目录下运行 bin/launcher start

    运行客户端 bin/presto server master:8080 catalog hive schema default

    连接hive !connect jdbc:hive2://master:10000/default hadoop hadoop

    开始查询分析

    第一题

    select t_user.age,t_order.buy_time,sum(t_order.price*t_order.qty-t_order.discount) as sum from t_user join t_order on t_user.uid=t_order.uid group by t_user.age,t_order.buy_time;

    部分结果

    +-------------+-------------------+----------------------+--+ | t_user.age | j_order.buy_time | sum | +-------------+-------------------+----------------------+--+ | 20 | 2016-11-17 | 1.7227062320000002 | | 25 | 2016-10-15 | 5.386111459 | | 25 | 2016-10-19 | 0.45088435299999996 | | 25 | 2016-10-20 | 2.8137519620000004 | | 25 | 2016-10-21 | 3.548087797 | | 25 | 2016-10-22 | 2.788946585 | | 25 | 2016-10-26 | 2.469814958 | | 25 | 2016-10-27 | 0.4795708140000001 | | 25 | 2016-10-30 | 2.8022007390000003 | | 25 | 2016-10-31 | 6.995954644 | ......

    第二题

    select t_user.sex,SUBSTRING(t_loan.loan_time,0,10) as time,sum(t_loan.loan_amount) as sum from t_user join t_loan on t_user.uid=t_loan.uid group by t_user.sex ,SUBSTRING(t_loan.loan_time,0,10);

    部分结果

    +--------


    Next Page: 10000

    Site Map 2018_01_14
    Site Map 2018_01_15
    Site Map 2018_01_16
    Site Map 2018_01_17
    Site Map 2018_01_18
    Site Map 2018_01_19
    Site Map 2018_01_20
    Site Map 2018_01_21
    Site Map 2018_01_22
    Site Map 2018_01_23
    Site Map 2018_01_24
    Site Map 2018_01_25
    Site Map 2018_01_26
    Site Map 2018_01_27
    Site Map 2018_01_28
    Site Map 2018_01_29
    Site Map 2018_01_30
    Site Map 2018_01_31
    Site Map 2018_02_01
    Site Map 2018_02_02
    Site Map 2018_02_03
    Site Map 2018_02_04
    Site Map 2018_02_05
    Site Map 2018_02_06
    Site Map 2018_02_07
    Site Map 2018_02_08
    Site Map 2018_02_09
    Site Map 2018_02_10
    Site Map 2018_02_11
    Site Map 2018_02_12
    Site Map 2018_02_13
    Site Map 2018_02_14
    Site Map 2018_02_15
    Site Map 2018_02_15
    Site Map 2018_02_16
    Site Map 2018_02_17
    Site Map 2018_02_18
    Site Map 2018_02_19
    Site Map 2018_02_20
    Site Map 2018_02_21
    Site Map 2018_02_22
    Site Map 2018_02_23
    Site Map 2018_02_24
    Site Map 2018_02_25
    Site Map 2018_02_26
    Site Map 2018_02_27
    Site Map 2018_02_28
    Site Map 2018_03_01
    Site Map 2018_03_02
    Site Map 2018_03_03
    Site Map 2018_03_04
    Site Map 2018_03_05
    Site Map 2018_03_06
    Site Map 2018_03_07
    Site Map 2018_03_08
    Site Map 2018_03_09
    Site Map 2018_03_10
    Site Map 2018_03_11
    Site Map 2018_03_12
    Site Map 2018_03_13
    Site Map 2018_03_14
    Site Map 2018_03_15
    Site Map 2018_03_16
    Site Map 2018_03_17
    Site Map 2018_03_18
    Site Map 2018_03_19
    Site Map 2018_03_20
    Site Map 2018_03_21
    Site Map 2018_03_22
    Site Map 2018_03_23
    Site Map 2018_03_24
    Site Map 2018_03_25
    Site Map 2018_03_26
    Site Map 2018_03_27
    Site Map 2018_03_28
    Site Map 2018_03_29
    Site Map 2018_03_30
    Site Map 2018_03_31
    Site Map 2018_04_01
    Site Map 2018_04_02
    Site Map 2018_04_03
    Site Map 2018_04_04
    Site Map 2018_04_05
    Site Map 2018_04_06
    Site Map 2018_04_07
    Site Map 2018_04_08
    Site Map 2018_04_09
    Site Map 2018_04_10
    Site Map 2018_04_11
    Site Map 2018_04_12
    Site Map 2018_04_13
    Site Map 2018_04_14
    Site Map 2018_04_15
    Site Map 2018_04_16
    Site Map 2018_04_17
    Site Map 2018_04_18
    Site Map 2018_04_19
    Site Map 2018_04_20
    Site Map 2018_04_21
    Site Map 2018_04_22
    Site Map 2018_04_23
    Site Map 2018_04_24
    Site Map 2018_04_25
    Site Map 2018_04_26
    Site Map 2018_04_27
    Site Map 2018_04_28
    Site Map 2018_04_29
    Site Map 2018_04_30
    Site Map 2018_05_01
    Site Map 2018_05_02
    Site Map 2018_05_03
    Site Map 2018_05_04
    Site Map 2018_05_05
    Site Map 2018_05_06
    Site Map 2018_05_07
    Site Map 2018_05_08
    Site Map 2018_05_09
    Site Map 2018_05_15
    Site Map 2018_05_16
    Site Map 2018_05_17
    Site Map 2018_05_18
    Site Map 2018_05_19
    Site Map 2018_05_20
    Site Map 2018_05_21
    Site Map 2018_05_22
    Site Map 2018_05_23
    Site Map 2018_05_24
    Site Map 2018_05_25
    Site Map 2018_05_26
    Site Map 2018_05_27
    Site Map 2018_05_28
    Site Map 2018_05_29
    Site Map 2018_05_30
    Site Map 2018_05_31
    Site Map 2018_06_01
    Site Map 2018_06_02
    Site Map 2018_06_03
    Site Map 2018_06_04
    Site Map 2018_06_05
    Site Map 2018_06_06
    Site Map 2018_06_07
    Site Map 2018_06_08
    Site Map 2018_06_09
    Site Map 2018_06_10
    Site Map 2018_06_11
    Site Map 2018_06_12
    Site Map 2018_06_13
    Site Map 2018_06_14
    Site Map 2018_06_15
    Site Map 2018_06_16
    Site Map 2018_06_17
    Site Map 2018_06_18
    Site Map 2018_06_19
    Site Map 2018_06_20
    Site Map 2018_06_21
    Site Map 2018_06_22
    Site Map 2018_06_23
    Site Map 2018_06_24
    Site Map 2018_06_25
    Site Map 2018_06_26
    Site Map 2018_06_27
    Site Map 2018_06_28
    Site Map 2018_06_29
    Site Map 2018_06_30
    Site Map 2018_07_01
    Site Map 2018_07_02
    Site Map 2018_07_03
    Site Map 2018_07_04
    Site Map 2018_07_05
    Site Map 2018_07_06
    Site Map 2018_07_07
    Site Map 2018_07_08
    Site Map 2018_07_09
    Site Map 2018_07_10
    Site Map 2018_07_11
    Site Map 2018_07_12
    Site Map 2018_07_13
    Site Map 2018_07_14
    Site Map 2018_07_15
    Site Map 2018_07_16
    Site Map 2018_07_17
    Site Map 2018_07_18
    Site Map 2018_07_19
    Site Map 2018_07_20
    Site Map 2018_07_21
    Site Map 2018_07_22
    Site Map 2018_07_23
    Site Map 2018_07_24
    Site Map 2018_07_25
    Site Map 2018_07_26
    Site Map 2018_07_27
    Site Map 2018_07_28
    Site Map 2018_07_29
    Site Map 2018_07_30
    Site Map 2018_07_31
    Site Map 2018_08_01
    Site Map 2018_08_02
    Site Map 2018_08_03
    Site Map 2018_08_04
    Site Map 2018_08_05
    Site Map 2018_08_06
    Site Map 2018_08_07
    Site Map 2018_08_08
    Site Map 2018_08_09
    Site Map 2018_08_10
    Site Map 2018_08_11
    Site Map 2018_08_12
    Site Map 2018_08_13
    Site Map 2018_08_15
    Site Map 2018_08_16
    Site Map 2018_08_17
    Site Map 2018_08_18
    Site Map 2018_08_19
    Site Map 2018_08_20
    Site Map 2018_08_21
    Site Map 2018_08_22
    Site Map 2018_08_23
    Site Map 2018_08_24
    Site Map 2018_08_25
    Site Map 2018_08_26
    Site Map 2018_08_27
    Site Map 2018_08_28
    Site Map 2018_08_29
    Site Map 2018_08_30
    Site Map 2018_08_31
    Site Map 2018_09_01
    Site Map 2018_09_02
    Site Map 2018_09_03
    Site Map 2018_09_04
    Site Map 2018_09_05
    Site Map 2018_09_06
    Site Map 2018_09_07
    Site Map 2018_09_08
    Site Map 2018_09_09
    Site Map 2018_09_10
    Site Map 2018_09_11
    Site Map 2018_09_12
    Site Map 2018_09_13