
          How Columnar Databases Support Modern Analytics

The increased requirements of modern analytical workloads – querying billions of rows on demand, in real time, and in unforeseen ways – are a challenge for traditional databases because they’re optimized for transactional workloads (e.g., point and range queries with indexes). A transactional query may return every column in a single row, whereas an analytical query may aggregate a single column across every row. Thus, it is far more efficient to store data by column rather than by row. In addition, the use of distributed data and massively parallel processing enables columnar databases to support scalable, high-performance analytics. In this webinar, we will use the architecture of MariaDB AX to explain how columnar storage and massively parallel processing work, and how they enable columnar databases to query billions of rows in real time, with the full power of SQL – a challenge for Apache Hadoop/Hive.
          A sysadmin's guide to SELinux: 42 answers to the big questions

Get answers to the big questions about life, the universe, and everything else about SELinux.

"It is an important and popular fact that things are not always what they seem …" ―Douglas Adams, The Hitchhiker's Guide to the Galaxy

Security, hardening, compliance, and policy are the four horsemen of the sysadmin apocalypse. In addition to our daily tasks — monitoring, backups, implementation, tuning, updating, and so on — we are also in charge of securing our systems, even those systems where a third-party provider tells us to disable the enhanced security. It seems like a job for Ethan Hunt in Mission: Impossible.

Faced with this dilemma, some sysadmins decide to take the blue pill, because they think they will never know the answer to the big question of life, the universe, and everything else. And, as we all know, that answer is 42.

In the spirit of The Hitchhiker's Guide to the Galaxy, here are the 42 answers to the big questions about managing and using SELinux on your systems.

  1. SELinux is a labeling system, which means every process has a label. Every file, directory, and system object also has a label. Policy rules control access between labeled processes and labeled objects. The kernel enforces these rules.
  2. The two most important concepts are: labeling (of files, processes, ports, etc.) and type enforcement (which isolates processes from one another based on their types).
  3. The correct label format is user:role:type:level (the level is optional).
  4. The purpose of Multi-Level Security (MLS) enforcement is to control processes (domains) based on the security level of the data they will be using. For example, a secret-level process cannot read top-secret data.
  5. Multi-Category Security (MCS) enforcement protects similar processes from each other (such as virtual machines, OpenShift gears, SELinux sandboxes, containers, and so on).
  6. Kernel parameters for changing SELinux modes at boot:
    • autorelabel=1 → forces the system to relabel
    • selinux=0 → the kernel does not load any part of the SELinux infrastructure
    • enforcing=0 → boot in permissive mode
  7. If you need to relabel the entire system:

    # touch /.autorelabel 
    # reboot
    

    If the system labeling contains a large number of errors, you may need to boot in permissive mode for the autorelabel to succeed.

  8. To check whether SELinux is enabled: # getenforce

  9. To temporarily enable/disable SELinux: # setenforce [1|0]

  10. SELinux status tool: # sestatus

  11. Configuration file: /etc/selinux/config

  12. How does SELinux work? Here is an example of labeling for an Apache web server:

    • Binary: /usr/sbin/httpd → httpd_exec_t
    • Configuration directory: /etc/httpd → httpd_config_t
    • Log file directory: /var/log/httpd → httpd_log_t
    • Content directory: /var/www/html → httpd_sys_content_t
    • Startup script: /usr/lib/systemd/system/httpd.service → httpd_unit_file_d
    • Process: /usr/sbin/httpd -DFOREGROUND → httpd_t
    • Ports: 80/tcp, 443/tcp → httpd_t, http_port_t

    A process running in the httpd_t security context can interact with objects that carry an httpd_something_t label.
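
    You can inspect these relationships directly on a running system with a few label-aware commands; this is just an illustration using the example paths above (adjust them to your own system):

    # ps -eZ | grep httpd                    # processes running as httpd_t
    # ls -Z /usr/sbin/httpd                  # binary labeled httpd_exec_t
    # ls -Zd /var/www/html                   # content directory labeled httpd_sys_content_t
    # semanage port -l | grep http_port_t    # ports Apache is allowed to bind to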

  13. Many commands accept a -Z argument to view, create, and modify security contexts:

    • ls -Z
    • id -Z
    • ps -Z
    • netstat -Z
    • cp -Z
    • mkdir -Z

    Contexts are set when files are created, based on their parent directory's context (with a few exceptions). RPMs can set contexts as part of installation.

  14. There are four key causes of SELinux errors, which are further explained in items 15-21 below:

    • Labeling problems
    • Something SELinux needs to know
    • A bug in an SELinux policy or an application
    • Your information may be compromised
  15. Labeling problem: If your files in /srv/myweb are not labeled correctly, access might be denied. Here are some ways to fix this:

    • If you know the label: # semanage fcontext -a -t httpd_sys_content_t '/srv/myweb(/.*)?'
    • If you know a file with the equivalent labeling: # semanage fcontext -a -e /srv/myweb /var/www
    • Restore the context (for both cases): # restorecon -vR /srv/myweb
  16. Labeling problem: If you move a file instead of copying it, the file keeps its original context. To fix these issues:

    • Change the context with the label: # chcon -t httpd_system_content_t /var/www/html/index.html
    • Change the context with the label of a reference file: # chcon --reference /var/www/html/ /var/www/html/index.html
    • Restore the context (for both cases): # restorecon -vR /var/www/html/
  17. If SELinux needs to know that HTTPD listens on port 8585, tell SELinux with: # semanage port -a -t http_port_t -p tcp 8585

  18. SELinux needs to know about booleans, which allow parts of the SELinux policy to be changed at runtime without rewriting the policy. For example, if you want httpd to send email, enter: # setsebool -P httpd_can_sendmail 1

  19. SELinux needs to know about booleans, which are just on/off settings for SELinux:

    • To see all booleans: # getsebool -a
    • To see the description of each one: # semanage boolean -l
    • To set a boolean: # setsebool [_boolean_] [1|0]
    • To configure it permanently, add the -P flag. For example: # setsebool httpd_enable_ftp_server 1 -P
  20. SELinux policies/applications can have bugs, including:

    • Unusual code paths
    • Configurations
    • Redirection of stdout
    • Leaked file descriptors
    • Executable memory
    • Badly built libraries

    Open a ticket (but do not file a Bugzilla report; there is no SLA with Bugzilla).

  21. Your information may be compromised if you have confined domains trying to:

    • Load kernel modules
    • Turn off the enforcing mode of SELinux
    • Write to etc_t/shadow_t
    • Modify iptables rules
  22. SELinux tools for developing policy modules: # yum -y install setroubleshoot setroubleshoot-server. Reboot the machine or restart the auditd service after installing.

  23. Use journalctl to list all logs related to setroubleshoot: # journalctl -t setroubleshoot --since=14:20

  24. Use journalctl to list all logs related to a particular SELinux label. For example: # journalctl _SELINUX_CONTEXT=system_u:system_r:policykit_t:s0

  25. When an SELinux error occurs, use the setroubleshoot log and try to find some possible solutions. For example, from journalctl:

    Jun 14 19:41:07 web1 setroubleshoot: SELinux is preventing httpd from getattr access on the file /var/www/html/index.html. For complete message run: sealert -l 12fd8b04-0119-4077-a710-2d0e0ee5755e
    
    # sealert -l 12fd8b04-0119-4077-a710-2d0e0ee5755e
    SELinux is preventing httpd from getattr access on the file /var/www/html/index.html.
    
    ***** Plugin restorecon (99.5 confidence) suggests ************************
    
    If you want to fix the label,
    /var/www/html/index.html default label should be httpd_syscontent_t.
    Then you can restorecon.
    Do
    # /sbin/restorecon -v /var/www/html/index.html
    
  26. Logs: SELinux messages can all be found in these locations:

    • /var/log/messages
    • /var/log/audit/audit.log
    • /var/lib/setroubleshoot/setroubleshoot_database.xml
  27. Logs: To look for SELinux errors in the audit log: # ausearch -m AVC,USER_AVC,SELINUX_ERR -ts today

  28. To search for SELinux Access Vector Cache (AVC) messages for a particular service: # ausearch -m avc -c httpd

  29. The audit2allow utility gathers information from the logs of denied operations and then generates SELinux policy-allow rules. For example:

    • To produce a human-readable description of why access was denied: # audit2allow -w -a
    • To view the type enforcement rules that would allow the denied access: # audit2allow -a
    • To create a custom module: # audit2allow -a -M mypolicy, where the -M option creates a type enforcement file (.te) with the specified name and compiles the rule into a policy package (.pp): mypolicy.pp mypolicy.te
    • To install the custom module: # semodule -i mypolicy.pp
  30. To configure a single process (domain) to run permissive: # semanage permissive -a httpd_t

  31. If you no longer want a domain to be permissive: # semanage permissive -d httpd_t

  32. To disable all permissive domains: # semodule -d permissivedomains

  33. Enabling the SELinux MLS policy: # yum install selinux-policy-mls. In /etc/selinux/config:

    SELINUX=permissive
    SELINUXTYPE=mls
    

    Make sure SELinux is running in permissive mode: # setenforce 0

    Use the fixfiles script to make sure that files are relabeled on the next reboot: # fixfiles -F onboot # reboot

  34. To create a user with a specific MLS range: # useradd -Z staff_u john

    Using the useradd command, map the new user to an existing SELinux user (in this case, staff_u).

  35. To view the mapping between SELinux and Linux users: # semanage login -l

  36. To define a specific range for a user: # semanage login --modify --range s2:c100 john

  37. To adjust the label on the user's home directory (if needed): # chcon -R -l s2:c100 /home/john

  38. To list the current categories: # chcat -L

  39. To modify the categories, or to start creating your own, edit the file /etc/selinux/_<selinuxtype>_/setrans.conf

  40. To run a command or script with a specific file, role, and user security context: # runcon -t initrc_t -r system_r -u user_u yourcommandhere

    • -t is the file context
    • -r is the role context
    • -u is the user context
  41. To disable SELinux for containers:

    • With Podman: # podman run --security-opt label=disable ...
    • With Docker: # docker run --security-opt label=disable ...
  42. If you need to give a container full access to the system:

    • With Podman: # podman run --privileged ...
    • With Docker: # docker run --privileged ...

And that's it: you now know the answers. So trust me, don't panic, and turn SELinux on.

About the author

Alex Callejas is a Technical Account Manager for Red Hat covering Latin America, based in Mexico City. He has more than 10 years of experience as a sysadmin and strong expertise in infrastructure hardening. He is passionate about open source and supports the community by sharing his knowledge at public events and universities. A geek by nature, he naturally tends to use the Fedora Linux distribution. More information about him is available [here][11].


via: https://opensource.com/article/18/7/sysadmin-guide-selinux

Author: Alex Callejas · Topic selection: lujun9972 · Translators: qhwdw, FSSlc · Proofreader: wxy

This article was translated by LCTT and proudly presented by Linux中国 (Linux China).


          Phpmyadmin login window doesn't appear?

Hi guys. I'm new here and new to PHP. I need big help.
Recently I was trying to install phpMyAdmin 5 on my Windows 8.1 computer without using XAMPP. I had already installed PHP, MySQL, and Apache.

So when I was following a step-by-step tutorial, I hit a bump when I typed in "localhost/pma" (p.s. pma is the phpMyAdmin folder where I extracted the phpMyAdmin installation). After typing it into localhost, I don't see the login page; instead I see a list of files in the pma directory.

I really need your help…I will paste the error log to look at.


          Chilli Apache F1
none
          These Oil Stocks Just Cashed in on the Permian Basin Pipeline Craze

These Oil Stocks Just Cashed in on the Permian Basin Pipeline Craze

Apache and Occidental Petroleum snagged premium prices for their midstream assets in the red-hot shale oil region.



          ExxonMobil joins Kinder Morgan, EagleClaw and Apache on Permian pipeline project
ExxonMobil joins Kinder Morgan, EagleClaw and Apache on Permian pipeline project
More
          Robust Message Serialization in Apache Kafka Using Apache Avro, Part 3

Part 3: Configuring Clients

Earlier, we introduced Kafka Serializers and Deserializers that are capable of writing and reading Kafka records in Avro format. In this part we are going to see how to configure producers and consumers to use them.

Setting up a Kafka Topic for use as a Schema Store

KafkaTopicSchemaProvider works with a Kafka topic as its persistent store. This topic will contain at most thousands of records: the schemas. It does not need multiple partitions,

Read more
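
For context, here is a minimal sketch of creating a single-partition topic of this kind with the standard Kafka CLI (the topic name, ZooKeeper address, replication factor, and the choice of log compaction are illustrative assumptions, not details from the article):

    kafka-topics.sh --create \
        --zookeeper zk1:2181 \
        --topic schema-store \
        --partitions 1 \
        --replication-factor 3 \
        --config cleanup.policy=compact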

The post Robust Message Serialization in Apache Kafka Using Apache Avro, Part 3 appeared first on Cloudera Engineering Blog.


          today's leftovers

          Lead Developer/Technical Sales Support - ImageX - Vancouver, BC
Proficiency in Linux administration, Apache configuration, MySQL database design, and PHP web development. We’re looking for a web coding whiz with experience...
From ImageX - Sat, 16 Jun 2018 06:51:11 GMT - View all Vancouver, BC jobs
          Drupal Developer - ImageX - Vancouver, BC
Advanced proficiency in LAMP stack (Linux administration, Apache configuration, MySQL database, and PHP web development)....
From ImageX - Wed, 09 May 2018 10:29:40 GMT - View all Vancouver, BC jobs
          QA Automation Analyst - Autodata Solutions - London, ON
Experience with web servers such as Apache HTTP Server or NGINX. Autodata is looking for a QA Automation Analyst to join their team in London!...
From Autodata Solutions - Wed, 08 Aug 2018 04:54:25 GMT - View all London, ON jobs
          Deployment Manager - Autodata Solutions - London, ON
Proficiency with web servers such as Apache HTTP Server or NGINX. Autodata Solutions is looking for a Deployment Manager to join their growing London team!...
From Autodata Solutions - Fri, 03 Aug 2018 19:58:35 GMT - View all London, ON jobs
          The Edge Search: Bluehost Hosting 2018 Review

Bluehost is one of the most affordable and reputable web hosting companies in the world. Established in 2003, they continue to grow and attract more than 20,000 new customers each month.

Known for rock-solid reliability, Bluehost's shared hosting comes complete with generous disk space and bandwidth, free domain name plus an array of additional hosting features such as automated backups and one-click WordPress installation.

By always going out of their way to help their customers, you can be sure of receiving all the technical support you need

Bluehost Features

Since the beginning of 2003, Bluehost has always provided high-quality service while keeping pace with the technical developments and improvements in the industry.

As the needs of webmasters have evolved, so too has the Bluehost offering. Their reputation for reliability and quality service has been well earned and is why they remain a leader in the ultra-competitive web hosting industry.
  1. Free Domain Name Included
  2. Unlimited Bandwidth
  3. Unlimited Disk Space
  4. Unlimited Add-On Websites
  5. 30 Day Money Back Guarantee

Visit Bluehost.com - 30 Day Money Back Guarantee. It's Risk Free!

Bluehost's outstanding service is combined with a feature-packed offering and industry leading uptimes. This is all possible due to their state-of-the-art network infrastructure. Their hi-tech data centre is very impressive and boasts Internet connectivity over their OC-48 connection at an incredible 2GB/sec bandwidth.

Quad processor servers, 24/7 monitoring, a diesel-powered backup generator and mirrored storage backups round out the data centre's notable list of features.
Total Domains:
2,163,617
  • .com
    1,810,607
  • .org
    167,674
  • .net
    130,598
  • .us
    21,703
  • .info
    20,392
  • .biz
    12,643


Even though Bluehost's basic hosting package is on a shared server, the specifications are far from typical when compared to industry standards for shared hosting plans.


Bluehost's servers run on 64-bit Linux distributions. What's more, excessive CPU load and server slowdowns are non-existent on this setup while super-fast site performance is maintained even at peak usage times. These specs are hard to match and leave Bluehost's competitors trailing in their wake.

Visit Bluehost.com - 30 Day Money Back Guarantee. It's Risk-Free!

Control Panel

For back-end administration, Bluehost provide the industry favourite cPanel interface. With its comprehensive features and intuitive design, webmasters have everything they need to easily launch and maintain their sites.

Furthermore, Bluehost's own Page Wizard application enables professional looking Web pages to be created with just a few clicks. Web-based file management and script support for Fantastico are just a few of the many other features that cPanel boasts.

Scripts

SimpleScripts, Mojo and Fantastico support enables users to quickly install a wide variety of popular software packages such as WordPress, Drupal, and Joomla. Never before has it been so easy to install blogs, forums, image galleries, polls and content management systems.

Fantastico de Luxe's popularity amongst webmasters is a testament to its simple operation, and its inclusion in Bluehost's plan adds even more value to their already feature-packed offering. For more advanced users who prefer installing scripts manually, Bluehost supports all popular scripting languages including:
  1. CGI
  2. Python
  3. PERL 5
  4. PHP4 & PHP5
  5. Ruby on Rails
  6. CRON jobs, Apache .htaccess and custom php.ini are also supported
WordPress is now one of the most widely used blogging and content management system platforms in the world and it's worth noting that Bluehost offers 1-Click installation of WordPress with all their hosting packages.


Visit Bluehost.com - 30 Day Money Back Guarantee. It's Risk-Free!

Uptime & Performance


Feature-laden, value for money hosting packages are important, but nothing is of greater importance than your web host providing basic network reliability. Claims of 99.9% uptime are all well and good but only the select few can back up their promises with actual data.


Bluehost is proud of their network integrity and list it as one of their key features. No longer do webmasters need to worry about losing business because their site is down. In fact, downtime is one of the most prevalent reasons why site owners shift from mediocre providers to a company like Bluehost who take their commitment to 99.9% up-time very seriously.

Independent testing on a site hosted by Bluehost revealed only thirty minutes of total downtime over a 90-day period. What's really impressive about this result is that, out of the total downtime, all thirty minutes were identified as planned maintenance. Bluehost's planned downtime is always scheduled during periods when Web traffic is off-peak to keep impact to a minimum.

Visit Bluehost.com - 30 Day Money Back Guarantee. It's Risk Free!


A webmaster will always find any amount of downtime unpalatable but 30 minutes in 90 days equates to 99.93% uptime, which is a highly impressive result. Given that performance is a key factor in the choice of a web hosting provider, we decided to undertake some performance testing of our own. We tested the page load time of the Bluehost homepage .

Bluehost Results: 

Homepage is loaded in 3.4 seconds
Homepage is fully loaded in 5.1 seconds

(Test server region: Dallas, USA. Connection: Cable (5/1 Mbps, 30ms). Date: 11 August 2013)

Help & Support

Bluehost offers customers several ways to access their technical support, one of which is the Bluehost Help Center. The Help Center contains a complete database of troubleshooting issues and fixes, together with instructions for hundreds of site-management tasks. It's the quickest and easiest way to get minor issues resolved.
Alternatively, clients with more complex problems can submit a help ticket through the Help Center and will receive email or live support as needed. Tickets are always answered in less than 12 hours, with most being addressed within just 1-2 hours.

Lastly, Live Phone support is also offered 24/7. This allows customers to speak directly with a technical expert and have all their questions comprehensively answered. Clients outside of the United States have not been forgotten either with additional phone numbers being provided specifically for them.

Bluehost Plans & Pricing

Bluehost have a straightforward approach to shared hosting. They only offer Linux shared server hosting on two simple plans; a standard hosting plan and a professional hosting plan (Bluehost also offers VPS, Dedicated Servers and Managed WordPress hosting).

STANDARD HOSTING PLAN

  • $3.95 per month
  • Unlimited Disk storage space
  • Unlimited Monthly Bandwidth
  • Unlimited Addon Domains (One free domain registration with account)
  • Unlimited Sub-domains
  • Unlimited Parked Domains
  • International Domains Supported
  • 1000 FTP Accounts (anonymous FTP support included)
  • Unlimited IMAP or POP3 E-mail Accounts
  • Secure IMAP Email Support
  • Unlimited Forwarding Email Addresses
  • Spam-Assassin Free-mail Filtering
  • cPanel Control Panel
  • 50 PostgreSQL or MySQL Databases
  • Frontpage 2000/2002/2003 Extensions
  • Ruby on Rails, CGI, Python, Perl 5, PHP 4&5 Scripts
  • Fully supported Server Side Includes (SSI)
  • SSH Shell Access
  • Fantastico Support
  • CRON Access and .htaccess
  • Free 1-Click Script Install
  • $100 Google Advertising Offer
  • 24/7 Phone, Chat & Email Support

Sign Up Now - Risk free - 30 day money back guarantee


PRO HOSTING PLAN

  • $19.95 per month
  • Unlimited Disk storage space
  • Unlimited Monthly Bandwidth
  • Unlimited Addon Domains (One free domain registration with account)
  • Unlimited Sub-domains
  • Unlimited Parked Domains
  • International Domains Supported
  • 1000 FTP Accounts (anonymous FTP support included)
  • Unlimited IMAP or POP3 E-mail Accounts
  • Secure IMAP Email Support
  • Unlimited Forwarding Email Addresses
  • Spam-Assassin Free-mail Filtering
  • cPanel Control Panel
  • 50 PostgreSQL or MySQL Databases
  • Frontpage 2000/2002/2003 Extensions
  • Ruby on Rails, CGI, Python, Perl 5, PHP 4&5 Scripts
  • Fully supported Server Side Includes (SSI)
  • SSH Shell Access
  • Fantastico Support
  • CRON Access and .htaccess
  • Free 1-Click Script Install
  • $100 Google Advertising Offer
  • 24/7 Phone, Chat & Email Support
  •  More CPU, Memory and Resources Added
  •  SiteBackup Pro Included
  •  Free Dedicated IP Address
  •  Free SSL Certificate
  •  Free Domain Name Privacy
  •  10 Free Postini
          Junior Software Engineer - Leidos - Morgantown, WV
Familiarity with NoSql databases (Apache Accumulo, MongoDB, etc.). Leidos has job opening for a Junior Software Engineer in Morgantown, WV....
From Leidos - Wed, 25 Jul 2018 12:47:39 GMT - View all Morgantown, WV jobs
          Network and Systems Administrator (Administrateur réseau et système) - UPA - Longueuil, QC
Apache and Tomcat web servers; MySQL and SQL Server database servers; Exchange and Symantec mail servers;...
From UPA - Fri, 20 Jul 2018 22:08:49 GMT - View all Longueuil, QC jobs
          New Feature: Redshift Spectrum now supports querying nested data
You can now use Amazon Redshift to directly query nested data in Apache Parquet, Apache ORC, JSON and Amazon Ion file formats stored in external tables in Amazon S3.
...
          Offer - Builders Express Handyman Services - USA
Website http://builders-express-handyman.services keyword/tags Phoenix Handyman Service, Tempe Handyman Service, Chandler Handyman Service, Gilbert Handyman Service, Scottsdale Handyman Service, Mesa Handyman Service, Peoria Handyman Service, Glendale Handyman Service, Avondale Handyman Service, Laveen Handyman Service, Buckeye Handyman Service, Litchfield Park Handyman Service, Suprise Handyman Service, Ahwatukee Handyman Service, Rio Verde Handyman Service, Paradise Valley Handyman Service, Fountain Hills Handyman Service, Carefree Handyman Service, Anthem Handyman Service, Gold Canyon Handyman Service, Apache Junction Handyman Service, Queen Creek Handyman Service, San Tan Valley Description Builders Express Handyman Services will be pleased to provide you with free over the phone Estimate for all handyman services and materials as well as an estimate time to complete any job. Our services include Furniture ASSEMBLY Service, Drywall Repairs & Finishing, Light Fixture Installation & Repair, Custom Shelving & Bookcases, Locks Rekey & Installation Garage Door Installations & Repair, Crown Molding Installation & Repair, Painting Services and much more. business owner name Jake Massey full address 2930 E Camelback Rd # 135A Phoenix, AZ 85016 phone no (602) 831-2125 Business email Info@handyman.nehemiahbuildersinc.com business hours 7 days a week 6:am to 9:00pm
          Install Customized Script (Soundcloud Clone) on Linux VPS
We have a customized script that we would like to have installed. It will entail configuring the VPS with the required packages (php, MySql, etc) on the VPS. Database may be installed on a separate VPS... (Budget: $25 - $50 USD, Jobs: Apache, Linux, MySQL, PHP, System Admin)
          Adopt Apache a Black - with White Husky / Mixed dog in Manassas, VA (22817703) (Adopt-a-Pet.com)
Adopt Apache a Black - with White Husky / Mixed dog in Manassas, VA (22817703) nice with cats, nice with dogs, great with children, housetrained, shots current

          The new head of the Wokajil Japache' is sworn in
  It was announced that Héctor Javier Pozuelos López has been appointed as the new head of the Wokajil Japache'. The head of the Sistema Penitenciario (SP), Camilo Morales Castro, installed Héctor Javier Pozuelos López in the post so that he can take up his duties […]
          Home-Based Satellite TV Technician/Installer - DISH Network - Apache, OK
Must possess a valid driver's license in the State you are seeking employment in, with a driving record that meets DISH's minimum safety standard.... $15 an hour
From DISH - Mon, 09 Jul 2018 19:17:49 GMT - View all Apache, OK jobs
          TELLER - FULL TIME - APACHE - Liberty National Bank - Apache, OK
Responsible for Accurately processing financial transactions and being an effective source of information for our customers, in lobby, drive-thru window or by...
From Liberty National Bank - Fri, 03 Aug 2018 00:02:34 GMT - View all Apache, OK jobs
          Personal Care Aide - May's Plus, Inc. - Apache, OK
Has a telephone and dependable transportation, valid driver’s license and liability insurance. Provides assistance with non-technical activities of daily living...
From May's Plus, Inc. - Tue, 17 Apr 2018 14:05:28 GMT - View all Apache, OK jobs
          Apache (APA) CEO John Christmann on Apache Corp & Kayne Anderson Acquisition Corp Altus Midstream Conference Call - Transcript
none
          Leaky Amazon S3 Buckets: Challenges, Solutions and Best Practices

Amazon Web Service (AWS) S3 buckets have become a common source of data loss for public and private organizations alike. Here are five solutions you can use to evaluate the security of data stored in your S3 buckets.

For business professionals, the public cloud is a smorgasbord of micro-service offerings which provide rapid delivery of hardware and software solutions. For security and IT professionals, though, public cloud adoption represents a constant struggle to secure data and prevent unexpected exposure of private and confidential information. Balancing these requirements can be tricky, especially when trying to adhere to your organization’s unique Corporate Information Security Policies and Standards.

Amazon Web Service (AWS) S3 buckets have become a common source of data loss for public and private organizations alike. Industry researchers and analysts most often attribute the root cause of the data loss to misconfigured services, vulnerable applications/tools, wide-open permissions, and / or usage of default credentials.

Recent examples of data leaks from AWS storage buckets include:

Data leakage is only one of the many risks presented by misuse of AWS S3 buckets. For example, attackers could potentially replace legitimate files with malicious ones for purposes of cryptocurrency mining or drive-by attacks.

To make matters worse for organizations (and simpler for hackers), automated tools are available to help find insecure S3 buckets.

How to protect data stored in AWS S3 buckets

Going back to the basics provides the most direct path to protecting your data. Recommended best practices for S3 buckets include always applying the principle of least privilege by using IAM policies and resource-based controls via Bucket Policies and Bucket ACLs.
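
As a quick sanity check of those controls, the AWS CLI can show how a bucket is currently exposed and reset an overly permissive ACL (the bucket name below is a placeholder):

    aws s3api get-bucket-acl --bucket my-example-bucket         # who is granted access via the ACL
    aws s3api get-bucket-policy --bucket my-example-bucket      # the attached bucket policy, if any
    aws s3api put-bucket-acl --bucket my-example-bucket --acl private   # reset the ACL to owner-only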

Another best practice is to define a clear strategy for bucket content by taking the following steps:

  • Creating automated monitoring / audits / fixes of S3 bucket security changes via CloudTrail, CloudWatch and Lambda.
  • Creating a bucket lifecycle policy to transfer old data to an archive automatically based on usage patterns and age.
  • When creating new buckets, applying encryption by default via server-side encryption (SSE-S3/SSE-C/SSE-KMS) and / or client-side encryption (see the CLI sketch after this list).
  • Creating an S3 inventory list to automatically report inventory, replication and encryption in an easy to use CSV / ORC format.
  • Testing, testing and testing some more to make sure the controls mentioned above have been implemented effectively and the data is secure.
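
As an illustration of the default-encryption step above, a single AWS CLI call can turn on SSE-S3 for an existing bucket (the bucket name is a placeholder):

    aws s3api put-bucket-encryption --bucket my-example-bucket \
        --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'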

Here at Tenable, I have researched five additional solutions you can use to evaluate the security of data stored in S3 buckets. These five solutions, when implemented correctly and incorporated into daily operational checklists, can help you quickly assess your organization’s cyber exposure in the public cloud and help you determine next steps for securing your business-critical data.

  • Amazon Macie: Automates data discovery and classification. Uses Artificial Intelligence to classify data files on S3 by leveraging a rules engine that identifies application data, correlates file extensions and predictable data themes, with strong regex matching to determine data type, cloud trail events, errors and basic alerts.
  • Security Monkey: An open source bootstrap solution on github provided by Netflix. This implements monitoring, alerting and an auditable history of Cloud configurations across S3, IAM, Security Groups, Route 53, ELBs and SQS services.
  • Amazon Trusted Advisor: Helps perform multiple other functions apart from identifying insecure buckets.
  • Amazon S3 Inventory Tool: Provides either a CSV or ORC which further aids in auditing the replication and encryption status of objects in S3.
  • Custom S3 bucket scanning solutions: Scripts available on github can be used to scan and check specific S3 buckets. These include kromtech’s S3-Inspector and sa7mon’s S3Scanner. In addition, avineshwar’s slurp clone monitors certstream and enumerates s3 buckets from each domain.

With the business demanding speed and ease of use, we expect to see the continued evolution of applications, systems and infrastructure away from on-premises data centers secured behind highly segregated networks to cloud-based “X-as-a-Service” architectures. The solutions and guidance highlighted above will help you identify security gaps in your environment and bootstrap solutions to automate resolution, alerting and auditing, thereby helping you meet your organization's Corporate Information Security Policies and Standards.

Learn more:


          Comment on "XAMPP 7.1.10, instala de manera sencilla este servidor en Ubuntu 17.10" by Jaime
Kind regards. I have followed all the steps, but when I search for it in the Dash it does not appear. I go to the /usr/share/applications folder and run it directly, and a message appears saying "an error occurred while launching the application". So I edit xampp-control-panel.desktop, remove the text "gksudo phyton" from Exec=, and save; that way the window opens for me, but it does not start the Apache and MySQL services. What should I do in that case? Thanks in advance for your reply.
          New comment on Item for Geeklist "Portland, OR Virtual Flea Market"

by dwwatson

Related Item: Thunderbolt Apache Leader

GM sent
          How to root Apache M
I greet you, dear users. Have you bought an Apache M smartphone and want to get root rights and extend the functionality of the device? Guideroot will help you. What is root? Root is the super administrator. It can significantly speed up the operation of the device and effectively adjust the energy … Continue reading
          The Coming of Cochise
The name Cochise started appearing in the U.S. baby name data in the 1950s: 1958: unlisted 1957: 8 baby boys named Cochise 1956: unlisted 1955: unlisted 1954: 5 baby boys named Cochise [debut] 1953: unlisted Ultimately we know of this name through Cochise, the leader of the Chokonen Chiricahua Apaches during the 1860s and early […]
          [Software] Download XAMPP v7.2.8 - web server simulator for your computer

Download XAMPP v7.2.8 - web server simulator for your computer

Many of the people and developers who work with languages such as PHP know that setting up a web server such as Apache and configuring it on Windows or Linux is very hard, and adding the other required programs such as MySQL, PHP, and Perl is even more difficult. XAMPP is an excellent piece of software that removes these problems for developers and web designers interested in this stack, so that even with minimal knowledge about installing software such as MySQL, PHP, and Apache they can ...


http://p30download.com/37583

Download link: http://p30download.com/fa/entry/37583


          [Macintosh] Download Cyberduck v6.7.0 MacOSX - FTP upload and download software for Mac

Download Cyberduck v6.7.0 MacOSX - FTP upload and download software for Mac

Cyberduck is an FTP client with a simple user interface that easily manages your downloads and uploads. The software supports FTP, SFTP, WebDAV, and Amazon S3 transfers. Adding a new connection is very simple and takes only a few seconds. It works with most external editors such as BBEdit, TextWrangler, or TextMate, and the CPU hang issue at the end of downloads has been fixed in the new version. ...


http://p30download.com/78138

Download link: http://p30download.com/fa/entry/78138


          Set Up Apache Usergrid Cluster on AWS
I need someone to set up and configure an Apache Usergrid cluster on AWS. Instructions are here: https://github.com/apache/usergrid/tree/master/deployment/aws I need this done within the next 72 hrs. (Budget: $250 - $750 CAD, Jobs: Amazon Web Services, Linux, Network Administration, System Admin)
          Apache Corporation Announces Cash Tender Offers for Up to $800...

HOUSTON, Aug. 09, 2018 (GLOBE NEWSWIRE) -- Apache Corporation (NYSE, NASDAQ: APA) today announced the commencement of tender offers (each, an "Offer," and collectively, the "Offers") to purc...

          ExxonMobil Joins Kinder Morgan, EagleClaw and Apache on Permian...

ExxonMobil signs a letter of intent; XTO Energy to be a shipper on pipeline

Kinder Morgan Texas Pipeline LLC (KMTP), a subsidiary of Kinder Morgan, Inc. (NYSE: KMI), EagleClaw Midstream Ventures, LLC (EagleClaw), a portfolio company of Blackstone Energy Partners...

          Phpmyadmin login window doesn't appear?

@skevingrafiks wrote:

Hi guys. I'm new here and new to PHP. I need big help.
Recently I was trying to install phpMyAdmin 5 on my Windows 8.1 computer without using XAMPP. I had already installed PHP, MySQL, and Apache.

So when I was following a step-by-step tutorial, I hit a bump when I typed in "localhost/pma" (p.s. pma is the phpMyAdmin folder where I extracted the phpMyAdmin installation). After typing it into localhost, I don't see the login page; instead I see a list of files in the pma directory.

I really need your help…I will paste the page to look at.

Posts: 1

Participants: 1

Read full topic


          System Engineer
NC-Charlotte, Charlotte, North Carolina Skills : Linux, Management, Shell, Tomcat, UNIX, WebLogic Description : Work Location Charlotte, NC/Minneapolis MN Job Title System Admin* Duration 12+ Months Job Details: Experience in 24X7 Production Support Environments - Experience with Tomcat - Experience with Apache - Experience with Unix/Linux environment - Experience with WebLogic. Nice to haves: Exposure or exper
          Found Black dog - Apache Junction, AZ US
This Mutt was found recently in Apache Junction, AZ US.
          (USA-GA-Virtual Office GA 1.08) ETL Consultant-Network Solutions      Cache   Translate Page   Web Page Cache   
This ETL Developer (Engineer II) position in Network Business Intelligence - Engineering Applications (NBI-Apps) will be working in a fun, challenging, fast-paced environment to develop Extract-Transform-Load (ETL) processes which enable the engineering arm of Windstream to function more efficiently and effectively. We are looking for an ETL developer responsible for implementing the programmatic collection and consolidation of data from Windstream systems into an Engineering department RDBMS. Examples of the data categories included include network topology and performance, OSS, financial, parts/purchasing, and billing. All major vendors of RDBMS systems are in use at Windstream. Your primary focus will be development of Extract-Transform-Load logic using CloverETL, Python scripting, and Hadoop data integration and processing packages in addition to migration of legacy scripted solutions to these paradigms. Also included will be database development in DDL/DML (primarily Oracle), and operational support of the ETL software infrastructure and processes. On top of the Oracle database development skills and the ETL tool experience, experience with software development is essential. *_Job Responsibilities:_* * Development of Extract-Transform-Load logic using CloverETL and Python languages and systems to support business intelligence needs. * Migration of legacy scripted solutions to our newer ecosystem of tools (CloverETL, Python). * Database development in DDL and DML (primarily Oracle). * Building reusable code and libraries for future use. * Manage work through Agile tools/methodology, collaborative repositories, issue tracking platforms, and wikis. * Manage projects through to completion. * Effective communications in person and using JIRA, Confluence, email, and chat tools. * Effective collaboration in a dynamic team environment. * Independent project execution with minimal oversight. *_Essential Skills:_* * Extract-Transform-Load methodologies and patterns. * Oracle database development including SQL, DDL, and DML. * Javlin CloverETL development and deployment. Experience with comparable ETL tools (Informatica, Alteryx, MS DTS) will be considered. * Programming in the Bash and Python languages. Experience with comparable languages (Perl, TCL, NodeJS) will be considered. * Proficiency with code versioning tools, such as Git. * Data retrieval from files, web-based APIs, and RDBMS (Oracle, MySQL, MsSQL). * Experience working with large, disparate data sets. * Web Service technologies and APIs (REST, RPC, SOAP, etc.) * Data exchange formats: delimited, fixed-format, XML, JSON, and YAML. * Drive to succeed and improve personally, and in ability to add value to the role, team, and company. * Self-starter, relentlessly curious, resourceful, collaborative, and inventive. * Good team player and communicator. * Highly organized and meticulous. * Positive attitude and the desire to solve problems in elegant and creative ways. *_Desired Skills:_* * Apache Hadoop platform experience – Ambari, Pig, Hive, Hbase, Spark, etc. * Database warehousing and performance tuning experience helpful. * Java development experience. * Familiarity with command line operating systems and shells (Linux, Cisco IOS). * Network programming concepts: IPv4, sockets, SSL, port-forwarding. * Unix/Linux administration. * User experience with JIRA and Confluence. * Tableau visualization experience. 
Minimum Requirements: College degree in Engineering or a related field and 5-7 years professional level experience with 0-2 years supervisory experience for roles with supervision; or 9 years professional level related Engineering/Technical experience with 0-2 years supervisory experience for roles with supervision; or an equivalent combination of education and professional level related Engineering/Technical experience required. **Primary Location:** **US-Georgia-Virtual Office GA 1.08* **Job Category:** **Engineering* **EEO Statement:** **Employment at Windstream is subject to post offer, pre-employment drug testing. Equal Opportunity Employer including minority/female/disability/veteran; Without regard to** **Requisition ID:** *18002805*
          Help setting up Apache2 Server
This might be a little long, so I apologize. I'm running Debian Stretch with KDE Plasma 5, kernel 4.9.0-7-amd64 on a Lenovo thinkpad x61 2Ghz, 4 GB RAM. I'm trying to run RISC with RPCemu and also...
          Azure HDInsight Interactive Query: Ten tools to analyze big data faster

Customers use HDInsight Interactive Query (also called Hive LLAP, or Low Latency Analytical Processing) to query data stored in Azure Storage and Azure Data Lake Storage in a super-fast manner. Interactive Query makes it easy for developers and data scientists to work with big data using the BI tools they love the most. HDInsight Interactive Query supports several tools to access big data in an easy fashion. In this blog we have listed the most popular tools used by our customers:

Microsoft Power BI

Microsoft Power BI Desktop has a native connector to perform direct query against HDInsight Interactive Query cluster. You can explore and visualize the data in interactive manner. To learn more see Visualize Interactive Query Hive data with Power BI in Azure HDInsight and Visualize big data with Power BI in Azure HDInsight .


Apache Zeppelin

The Apache Zeppelin interpreter concept allows any language or data-processing backend to be plugged into Zeppelin. You can access Interactive Query from Apache Zeppelin using a JDBC interpreter. To learn more please see Use Zeppelin to run Hive queries in Azure HDInsight.


Visual Studio Code

With HDInsight Tools for VS Code, you can submit interactive queries as well as look at job information in HDInsight Interactive Query clusters. To learn more please see Use Visual Studio Code for Hive, LLAP or pySpark.


Visual Studio

Visual Studio integration helps you create and query tables in a visual fashion. You can create Hive tables on top of data stored in Azure Data Lake Storage or Azure Storage. To learn more please see Connect to Azure HDInsight and run Hive queries using Data Lake Tools for Visual Studio.


Ambari Hive View

Hive View is designed to help you author, optimize, and execute queries. With Hive Views you can:

• Browse databases.
• Write queries or browse query results in full-screen mode, which can be particularly helpful with complex queries or large query results.
• Manage query execution jobs and history.
• View existing databases, tables, and their statistics.
• Create/upload tables and export table DDL to source control.
• View visual explain plans to learn more about query plans.

To learn more please see Use Hive View with Hadoop in Azure HDInsight .


Beeline

Beeline is a Hive client that is included on the head nodes of an HDInsight cluster. Beeline uses JDBC to connect to HiveServer2, a service hosted on the HDInsight cluster. You can also use Beeline to access Hive on HDInsight remotely over the internet. To learn more please see Use Hive with Hadoop in HDInsight with Beeline.
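
As an example, from an SSH session on a cluster head node you would typically connect with a JDBC URL along these lines (the host, port, and transport mode shown are common defaults and may differ on your cluster):

    beeline -u 'jdbc:hive2://headnodehost:10001/;transportMode=http'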


Hive ODBC

Open Database Connectivity (ODBC) API, a standard for the Hive database management system, enables ODBC compliant applications to interact seamlessly with Hive through a standard interface. Learn more about how HDInsight publishes HDInsight Hive ODBC driver .


Tableau

Tableau is a very popular data visualization tool. Customers can build visualizations by connecting Tableau with HDInsight Interactive Query.


Apache DBeaver

Apache DBeaver is a SQL client and database administration tool. It is free and open source (ASL). DBeaver uses the JDBC API to connect to SQL-based databases. To learn more, see How to use DBeaver with Azure #HDInsight.


Excel

Microsoft Excel is the most popular data analysis tool, and connecting it with big data is even more interesting for our customers. An Azure HDInsight Interactive Query cluster can be integrated with Excel via ODBC connectivity. To learn more, see Connect Excel to Hadoop in Azure HDInsight with the Microsoft Hive ODBC driver.


Try HDInsight now

We hope you will take full advantage of the fast query capabilities of HDInsight Interactive Query using your favorite tools. We are excited to see what you will build with Azure HDInsight. Read this developer guide and follow the quick start guide to learn more about implementing these pipelines and architectures on Azure HDInsight. Stay up-to-date on the latest Azure HDInsight news and features by following us on Twitter #HDInsight and @AzureHDInsight. For questions and feedback, please reach out to AskHDInsight@microsoft.com.

About HDInsight

Azure HDInsight is Microsoft’s premium managed offering for running open source workloads on Azure. Azure HDInsight powers mission critical applications ranging in a wide variety of sectors including, manufacturing, retail education, nonprofit, government, healthcare, media, banking, telecommunication, insurance, and many more industries ranging in use cases from ETL to Data Warehousing, from Machine Learning to IoT, and more.

Additional resources: Get started with HDInsight Interactive Query Cluster in Azure.
          Where Quality is the Standard. FHA Approved!
We service the Rim Country and White Mountains Area and most of the Flagstaff area, including Gila, Navajo, Apache and Coconino Counties. FHA Certified and are ready to assist you with all your residential appraisal needs. "Where quality is the standard."
Payson, AZ 85541
          Apache, MySQL & PHP on macOS Mojave

Apple macOS 10.14 ships with both a recent version of Apache (2.4.x) and PHP (7.1.x), so you’ll just have to install MySQL and go through a few steps to get everything up and running.

Apache

First, you have to create a web root in your user account:

mkdir ~/Sites

Then add a configuration for your user:

sudo tee /etc/apache2/users/$USER.conf <<EOF
<Directory "$HOME/Sites/">
    Options Indexes MultiViews FollowSymLinks
    AllowOverride All
    Require all granted
</Directory>
EOF

Now we have to make sure that our user config above actually gets loaded:

sudo tee -a /etc/apache2/other/$USER-settings.conf <<EOF
Include /private/etc/apache2/users/*.conf
EOF

If you want to use vhosts, you’ll also have to make sure that the vhosts config gets loaded:

sudo tee -a /etc/apache2/other/$USER-settings.conf <<EOF
Include /private/etc/apache2/extra/httpd-vhosts.conf
EOF

After that, configure vhosts as necessary in /etc/apache2/extra/httpd-vhosts.conf (don’t forget to remove the examples in there).
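
If you prefer to script this step in the same style as above, here is a hypothetical vhost entry (the hostname and document root are placeholders, not values from this guide):

sudo tee -a /etc/apache2/extra/httpd-vhosts.conf <<EOF
<VirtualHost *:80>
    ServerName example.test
    DocumentRoot "$HOME/Sites/example"
    <Directory "$HOME/Sites/example">
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>
EOF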

It seems that mod_rewrite no longer gets loaded by default, so we’ll also add that to our config:

sudo tee -a /etc/apache2/other/$USER-settings.conf <<EOF
LoadModule rewrite_module libexec/apache2/mod_rewrite.so
EOF

PHP

PHP doesn’t get loaded by default. So we’ll also add it to our config:

sudo tee -a /etc/apache2/other/$USER-settings.conf <<EOF
LoadModule php7_module libexec/apache2/libphp7.so
EOF

You should also configure a few settings in /etc/php.ini :

sudo tee -a /etc/php.ini <<EOF
date.timezone = "`sudo systemsetup -gettimezone | awk '{print $3}'`"
display_errors = on
error_reporting = -1
EOF

To activate these settings you have to restart Apache:

sudo apachectl restart

If you also need PEAR/PECL, follow these instructions.

MySQL

MySQL is not shipped with macOS, so we’ll have to install that manually. Instead of going for an installer package, we’ll use Homebrew . Once Homebrew is installed, installing MySQL is as simple as:

brew install mysql

If you want to start MySQL automatically, run:

brew services start mysql

Any comments? Ping me on Twitter .


          One photograph equals eight years of Memoranda in the Greek Army... What did you see inside the crate, Chief? PHOTO
One photograph equals eight years of Memoranda in the Greek Army...
A single photograph captures the state into which the Greek Armed Forces, and especially the Army, have fallen after eight years of Memoranda and several earlier years of underfunding of their operational needs. It shows the Chief of the Hellenic Army General Staff, Lieutenant General Alkiviadis Stefanis, who went to the 651 War Materiel Base Depot (AVYP) at Agios Stefanos, Attica, to do what? To inspect...
boxes of spare parts! Of course, they are not just... any spare parts; they are the spares for the used, armed OH-58 Kiowa reconnaissance helicopters, 70 of which the Greek Army will take delivery of in order to ultimately put some 30-40 of them into service, using the rest as spares as well. But they are still spare parts. And the photo shows no less than the Army Chief gazing, enraptured, at a box of spare parts straight from the old-stock warehouses of the US Army.

Regardless of whether, under certain conditions, the OH-58 Kiowas can help, and regardless of the excellent work being done at the 651 AVYP, the picture is depressing: the other side has aircraft carriers, F-35s, ballistic missiles, attack helicopters and so on, while we are back to the '50s, with an Army Chief absorbed in inspecting crates of spare parts from the old war-materiel depots of the US Army! Come to think of it, since we are on the subject of helicopter spares, is there any thought of buying spare parts for the AH-64A/D Apaches? So that they might actually fly (and we don't mean 4-5 of them at exercises or parades). Maybe...

          ADDM: review of some older CVE vulnerabilities

This document contains official content from the BMC Software Knowledge Base. It is automatically updated when the knowledge article is modified.


PRODUCT:

BMC Discovery


COMPONENT:

BMC Atrium Discovery and Dependency Mapping


APPLIES TO:

BMC Atrium Discovery and Dependency Mapping



PROBLEM:

 

 A security tool reported some vulnerabilities for ADDM 9.0.

  

 

 


SOLUTION:

 

Here are some examples of vulnerabilities that were already reviewed:

  
    
  
  CVE-2004-2761
     
  Red Hat has addressed this issue in "Red Hat Certificate System 8" released in RHEL 5.3+ 
  
          => false positive 
  
           
  
  CVE-2009-3555
     
          impacts the package java-1.6.0-openjdk that is not installed in the ADDM 9+ appliance 
  
          => false positive 
  
   
      
   
   CVE-2010-4478 
       
           "Not vulnerable. This issue did not affect the versions of openssh as shipped with Red Hat Enterprise Linux 4, 5, or 6."  
   
           => false positive  
  
  
    
  
  CVE-2010-4755
     
          "We do not consider a denial of service flaw in a client application such as sftp to be a security issue." 
  
          => false positive 
  
   
      
   
   CVE-2010-5107 
1/ Google CVE-2010-5107 and look for the RedHat portal, in this case:   
       https://access.redhat.com/security/cve/CVE-2010-5107   
  
2/ On this page, look for the RHSA number, in this case "RHSA-2013:1591". See :   
    https://rhn.redhat.com/errata/RHSA-2013-1591.html   
  
3/ On this page, look for the entry for "Red Hat Enterprise Linux Server (v. 6)" and scroll down to the section titled "x86_64:"   
  
4/ In this section, you will see a list of packages that include fixes for this RHSA. Not all of these will apply to ADDM. You can compare this list to the packages listed in https://docs.bmc.com/docs/display/ADDMOSU/Latest+RHEL+6+operating+system+upgrade. If it's not listed here, the package is not installed in ADDM.   
  
5/ In this case, the packages that match both the RedHat page and our doc are:   
openssh-5.3p1-94.el6   
openssh-clients-5.3p1-94.el6   
openssh-server-5.3p1-94.el6   
  
and we can see that the version numbers from our doc match those of the RedHat page; therefore the fix is included in the latest ADDM OS upgrade.   
  
So in summary, we have the RedHat fix, but all the vulnerability scanner can see is that it's not OpenSSH v6.2.  
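
A quick way to run the comparison from steps 4 and 5 on the appliance itself is to query the installed versions directly (package names as in the openssh example above):

    # rpm -q openssh openssh-clients openssh-server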
   
      
  
  
    
  
  CVE-2011-5000
     
          extract: "[...] low security impact. A future update may address this issue." 
  
          => The ADDM OS upgrade embeds the RHEL fix when it is available (RedHat has not provided any plans for now). 
  
    
  
  CVE-2012-0814
     
          extract: "Not vulnerable. This issue did not affect the versions of openssh as shipped with Red Hat Enterprise Linux 4, 5, or 6." 
  
          => false positive 
  
    
  
  CVE-2012-2733
  
          according to 
     
          the solution is upgrade to  tomcat6-6.0.35-29_patch_06.ep6.el6.noarch.rpm 
  
          ADDM 9.0 contains tw-tomcat-6.0.35-2.rhel6.noarch 
  
          ADDM 9.0SP1 contains tw-tomcat-6.0.36-1.rhel6.noarch 
  
          => upgrade to ADDM 9.0SP1 to resolve the issue 
  
    
  
  CVE-2012-3546
  
          according to 
     
          It impacts "Apache Tomcat 6.x before 6.0.36" 
  
          => upgrade to ADDM 9.0SP1 to resolve the issue 
  
    
  
  CVE-2012-4431
     
          "This issue did not affect [...] tomcat6 as shipped with Red Hat Enterprise Linux 6" 
  
          => false positive 
  
    
  
    
  
  CVE-2012-4534
     
          this impacts "Apache Tomcat 6.x before 6.0.36" 
  
          => upgrade to ADDM 9.0SP1 to resolve the issue 
  
    
  
  CVE-2012-5568
  
          the link below provides a workaround for this low impact problem: 
     
    
  
    
  
  CVE-2012-5885/CVE-2012-5886/CVE-2012-5887
  
          according to 
           
          This impact "Apache Tomcat [...] before 6.0.36," 
  
          => upgrade to ADDM 9.0SP1 to resolve the issue 

 


Article Number:

000094984


Article Type:

Solutions to a Product Problem




          5 Open Source Security Risks You Should Know About

By giving developers free access to well-built components that serve important functions in the context of wider applications, the open source model speeds up development times for commercial software by making it unnecessary to build entire applications completely from scratch.

However, with research showing that 78 percent of audited codebases contained at least one open source vulnerability, of which 54 percent were high-risk ones that hackers could exploit, there is clear evidence that using open source code comes with security risks. Such risks often don’t arise due to the quality of the open source code (or lack thereof) but due to a combination of factors involving the nature of the open source model and how organizations manage their software.

Read on to find out the five open source security risks you should know about.

Publicity of Exploits

The nature of the open source model is that open source projects make their code available to anybody. This has the advantage that the open source community can flag potential exploits they find in the code and give open source project managers time to fix the issues before publicly revealing information on vulnerabilities.

However, eventually such exploits are made publicly available on the National Vulnerability Database (NVD) for anyone to view. Hackers can use the publicity of these exploits to their advantage by targeting organizations that are slow to patch applications that depend on open source projects with recently disclosed vulnerabilities.

A pertinent example of issues due to publicly available exploits was the major Equifax breach in 2017 wherein the credit reporting agency exposed the personal details of 143 million people. The reason the exposure occurred was that attackers noticed Equifax used a version of the open source Apache Struts framework which had a high-risk vulnerability, and the hackers used that information to their advantage.

Dealing with this risk from the organization's perspective means recognizing that open source exploits are made public and that hackers stand to gain a lot from attempting to breach services that use vulnerable components. Update as quickly as possible or pay the consequences.

Difficulty Managing Licenses

Single proprietary applications are often composed of multiple open source components, the projects for which are released under any of several license types, such as Apache License, GPL, or MIT License. This leads to difficulty in managing open source licenses considering the frequency with which enterprises develop and release software and the fact that over 200 open source license types exist.

Organizations are required to comply with all individual terms of different licenses, and non-compliance with the terms of a license puts you at risk of legal action, potentially damaging the financial security of your company.

Tracking licenses manually is prohibitively time-consuming―consider a software composition analysis tool that can automatically track all of the different open source components and licenses you use in your applications.

Potential Infringement Issues

Open source components may introduce intellectual property infringement risks because these projects lack standard commercial controls, giving a means for proprietary code to make its way into open source projects. This risk is evident in the real-world case of the SCO Group, which contended that IBM stole part of the UnixWare source code, used it for Project Monterey, and sought billions of dollars in damages.

Appropriate due diligence into open source projects can flag up potential infringement risks.

Operational Risks

One of the main sources of risks when using open source components in the enterprise comes from operational inefficiencies. Of primary concern from an operational standpoint is the failure to track open source components and update those components as new versions become available. These updates often address high-risk security vulnerabilities, and delays can cause a catastrophe, as was the case in the Equifax breach.

It’s vital, therefore, to keep an inventory of your open source usage across all your development teams, not only to ensure visibility and transparency, but to avoid different teams using different versions of the same component. Keeping an inventory needs to become part of a dedicated policy on open source usage, and software composition analysis tools provide a means to enforce this practice in an automated, easily manageable way without manually updating spreadsheets.
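
To make the inventory idea concrete, here is a minimal Java sketch that lists the components declared in a single Maven pom.xml. The file path, and the assumption that your projects use Maven at all, are illustrative only; a real software composition analysis tool would also resolve transitive dependencies and map each component to its license.

import java.io.File;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class DependencyInventory {
    public static void main(String[] args) throws Exception {
        // Path to the Maven build file to inventory; adjust as needed.
        File pom = new File(args.length > 0 ? args[0] : "pom.xml");

        DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(pom);

        // Every <dependency> element declares one third-party component.
        NodeList deps = doc.getElementsByTagName("dependency");
        for (int i = 0; i < deps.getLength(); i++) {
            Element dep = (Element) deps.item(i);
            System.out.printf("%s:%s:%s%n",
                    text(dep, "groupId"),
                    text(dep, "artifactId"),
                    text(dep, "version"));
        }
    }

    // Returns the text of the first child element with the given tag, or "?" if absent.
    private static String text(Element parent, String tag) {
        NodeList nodes = parent.getElementsByTagName(tag);
        return nodes.getLength() > 0 ? nodes.item(0).getTextContent().trim() : "?";
    }
}

Feeding the output of a sketch like this into a shared list, per team and per release, is the simplest possible starting point before adopting a dedicated tool.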

Another issue is abandoned projects that perhaps begin with much active involvement from the open source community but eventually peter out until nobody updates them anymore. If such projects make their way into apps in the form of libraries or frameworks, your developers are responsible for fixing future vulnerabilities. Part of good inventory management is to track projects that are updated infrequently.

Developer Malpractices

Some security risks arise due to developer malpractices, such as copying and pasting code from open source libraries. Copying and pasting is an issue firstly because you copy any vulnerabilities that may exist in the project’s code when you do it, and secondly because there is no way to track and update a code snippet once it’s added to your codebase, making your applications susceptible to potential future vulnerabilities that arise. You can avoid this issue by creating an open source policy that specifically forbids copying and pasting snippets directly from projects to your application codebases.

Another malpractice that can occur is the manual transfer via email of open source components across teams. This is opposed to the recommended best practice which is to use a binary repository manager or a shared, secure network location for transferring components.

Conclusion

Open source is a highly useful model that deserves its current standing as the bedrock of the development of many modern applications. However, smart use of open source components involves acknowledgment of the security risks involved in using these components in your applications and prudent, proactive action to minimize the chances of these risks affecting your organization directly.


          Boca 2 - Libertad (Paraguay) 0 - Copa Libertadores 2018      Cache   Translate Page   Web Page Cache   
BOCA BEAT LIBERTAD IN THE FIRST LEG WITH GOALS FROM ÁBILA AND ZÁRATE
With Tévez out, Zárate sealed Boca's win and proved the Mellizo right
With a strong performance and a goal, his display justified Barros Schelotto's decision to leave Carlitos on the bench; he played only the last 4 minutes. Afterwards, the coach brushed off the controversy.
Right now, when the debate in Argentine football is over the crown of the King of Cups, the seventh title is the blue-and-gold obsession. So the reunion with their fans brings a tingle. It is a Libertadores night, like so many others starring Boca. The biblical downpour does not matter. The fans sing. They cheer. But the spotlight falls on the top player, the one who shook the transfer market because of the weight of his surname, because of the noise his arrival generated, and because he pushed aside one of the club's biggest idols.

That player is Mauro Zárate, of course. The forward who left the Premier League to return to Vélez and lend a hand in its worst sporting moment. The one who was welcomed like a rock star and left Liniers under a cloud. The one who arrived at Boca billed as the star and took top billing from Carlos Tévez.

Guillermo Barros Schelotto made it clear during the preseason in the United States that Zárate is a starter, and Tévez understood he would have to chase from behind. So there was heavy pressure on Zárate in his debut at the Bombonera. And he seemed to feel that weight for much of the first half: he barely appeared, leaving most of the responsibility on the ball to Edwin Cardona.

Until he loosened up. He played a one-two with Wanchope Ábila, Tévez's friend, produced a short dribble and unleashed a lethal left-footed strike that effectively closed out the match early. He celebrated it with heart and soul. It was a release. And he looked to the sky. "For my uncle, who was a Boca fanatic and passed away a while ago," he explained once the result was sealed, with the satisfaction of a job well done.

Was it selfishness or confidence to take the ball on and rattle the Paraguayan goal? "Sometimes the greedy play works out for me," he says, amused. His individual spark, that conviction to finish the move, is his trademark.

"I feel good, very comfortable in that position," he added. Playing as a number 10 behind the "9" (Wanchope on this occasion, and Darío Benedetto once he recovers from his Achilles tendinitis), Zárate only found space to play in the second half. He moved with more freedom, escaping the attention of the Paraguayan defenders. And Carlitos watched it all from the bench...

The forward from Fuerte Apache does not like it. But he supports the team. He publicly acknowledged that he is starting from behind. In private, he works to show the Mellizo that he wants to contribute his experience. When he came on in the final stretch of the match, Zárate was no longer on the pitch; Nahitan Nández had replaced him. Tévez came on for Wanchope five minutes from the end. It was Boca's worst spell, and they avoided conceding a Libertad goal only thanks to the reflexes of Esteban Andrada, who pushed away a long-range shot from Wilson Leiva with help from the crossbar.

Barros Schelotto made his position clear at the press conference. When asked what effect a player like Carlitos on the bench could have, he declared: "As a coach you know you have to make decisions. The problem is yours (meaning the journalists), who start to speculate. But everything is fine. The message is clear. Whoever plays, we are representing Boca and we have to give everything. We are at a very even footballing level."

Boca won, with very few moments of good football, but the star signing of the window paid it back with a goal. The intrigue is intense. The cameras focused on Carlitos after Mauro's goal. He was sunk into a big coat. Pleased with the goal from his new teammate, the one that settled the match at the end of the first half. And certain that he will have to work twice as hard, beyond his blue-and-gold history.

The most relieved man is Guillermo, who has attacking options to spare for the big objective: winning the Copa Libertadores.

          /net-sourceforge-MSSCodeFactory-CFInternet-2.10.10033-ApacheV2-src.zip      Cache   Translate Page   Web Page Cache   
none
          PHP Developer      Cache   Translate Page   Web Page Cache   
Pienza meeti g de colombia sas - Bogotá DC - Develop applications with advanced knowledge of: PHP, MySQL, HTML, Javascript, AJAX, XML; administer systems/servers: Linux/Windows/Apache/IIS; develop responsive web applications on platforms such as PHP, Python, Wordpress, Moodle. With...
          HydroComp v2011      Cache   Translate Page   Web Page Cache   

crack software download CATENA.SIMetrix-SIMPLIS.8.0 DATEM Summit Evolution v6.8 GLOBE Claritas v6.6 Kepware v6.4
ttmeps#gmail.com ----- change "#" to "@"
Anything you need,You can also check here: ctrl + f

AMI.Vlaero.Plus.v2.3.0.10
2S.I. PRO_SAP RY2015b v15.0.1
Aquaveo Surface-water Modeling System Premium v11.2.12 Win64
Aquaveo.GMS.Premium.v10.0.11.Win64
Ashampoo.3D.CAD.Pro.v5.0.0.1
3DCS Variation Analyst MultiCAD v7.2.2.0 Win32_64
3DCS Variation Analyst v7.3.0.0 for CATIA V5 Win32_64
AGI.Systems.Tool.Kit(STK).v10.1.3
ANSYS Customization Tools (ACT) 16.0-16.1 Suite
ANSYS Electromagnetics Suite 16.2 Win64
Ansys Products v16.2 Win64Linux64
Ashampoo.3D.CAD.Architecture.5.v5.5.0.02.1
Ashampoo.3D.CAD.Professional.5.v5.5.0.01
Avenza Geographic Imager v5.0.0 for Adobe CS5-CC2015 Win32_64
Avenza MAPublisher v9.6.0 for Adobe CS5-CC2015 Win32_64
AVEVA.PDMS.V12.1 SP1
B&K Pulse v19.1
LEAP.Bridge.Steel.V8i.SS2.01.02.00.01
STAAD.Foundation.Advanced.V8i.SS3.07.02.00.00
BioSolveIT.SeeSAR.v3.2
AutoPIPE Vessel V8i SS1 v33.03.01.07
HAMMER V8i v08.11.06.58
WaterCAD & WaterGEMS V8i SS6 08.11.06.58
Cadence Allegro and OrCAD (Including ADW) v17.00.005
CadSoft.Computer.EAGLE.Professional.v7.3.0 x32x64
Carlson.Civil.Suite.2016.150731.Win32_64
Carlson.Precision.3D.2015.31933
CD-Adapco Star CCM+ 10.04.011 Win64Linu64
ClearTerra LocateXT ArcGIS for Server Tool 1.2 Win32_64
ClearTerra LocateXT Desktop 1.2 Win32_64
ClearTerra.LocateXT.ArcGIS.for.Server.Tool.v1.2.Win32_64
ClearTerra.LocateXT.Desktop.v1.2.Win32_64
CST Studio Suite 2015 +SP4
CD-ADAPCO.STAR-CCM.10.04.011-R8(double precision).Win64.&.Linux64
CES EduPack v2015
Schlumberger InSitu Pro 2.0
easycopy v8.7.8
Chasm.Ventsim.Visual.Premium.v4.0.6.1.Win32_64
Command.Digital.AutoHook.2016.v1.0.1.20
Corel.Corporation.CorelCAD.2015.v2015.5.Win32_64
Concept GateVision v5.9.7 Win&Linux
Crosslight.Apsys.2010.Win
Cmost Studio v2014
Delcam PowerMILL2Vericut v2016 Win64
Delcam PowerSHAPE 2016 Win64
DICAD.Strakon.Premium.v2015
DownStream Products v2015.6
DownStream Products v2015.8
DeskArtes.3Data.Expert.v10.2.1.7 x32x64
DeskArtes.Dimensions.Expert.v10.2.1.7.x32x64
DeskArtes.Sim.Expert.v10.2.1.7.x32x64
DriveWorks Pro 12.0 SP0
Kelton.Flocalc.Net v1.6.Win
Delcam.PowerINSPECT.2015.R2.SP1.Win32_64
DS DELMIA D5 V5-6R2014 GA
DAVID laserscanner 4.2.0.134 Pro
Elite.Software.Chvac.8.02.24.With.Drawing.Board.6.01
Elite.Software.Energy.Audit.7.02.113.Win
Elite.Software.Rhvac.9.01.157.With.Drawing.Board.6.01
PSS-ADEPT v5.0
ge interllution ifix v4.0
ESSCA OpenFlow v2012
Trimble RealWorks v6.5
ESRI CityEngine Advance 2015.1.2047 x64
Exelis ENVI v5.3,IDL v8.5,LiDAR v5.3 win64
EMIT.Maxwell.v5.9.1.20293
ESI PAM-FORM 2G v2013.0 Win
FEI.Amira.v6.0.1.Win32_64
FEI.Avizo.v9.0.1.Win32_64Linux.X64MACOSX
FIDES-DV.FIDES.CantileverWall.v2015.117 
FIDES-DV.FIDES.Flow.v2015.050
FIDES-DV.FIDES.GroundSlab.v2015.050 
FIDES-DV.FIDES.PILEPro.v2015.050 
FIDES-DV.FIDES.Settlement.2.5D.v2015.050
FIDES-DV.FIDES.Settlement.v2015.050 
FIDES-DV.FIDES.SlipCircle.v2015.050
FIDES-DV.FIDES.BearingCapacity.v2015.050
Global Mapper 16.2.5 Build 081915 x86x64
Graitec OMD v2015
rsnetworx for controlnet v11 cpr9 sr5
Harlequin Xitron Navigator v9 x32x64
HDL Works HDL Companion 2.8 R2 WinLnxx64
HDL Works IO Checker 3.1 R1 WinLnx64
HDL.Works.HDL.Design.Entry.EASE.v8.2.R6.for.Winlnx64
HEEDS.MDO.2015.04.2.Win32_64.&Linux64
Honeywell UniSim Design R430 English
thermoflow v24
Lakes Environmental AERMOD View v8.9.0
Lakes Environmental ARTM View v1.4.2
Lakes Environmental AUSTAL View v8.6.0
Mastercam.X9.v18.0.14020.0.Win64
McNeel.Rhinoceros.v5.0.2.5A865.MacOSX
McNeel.Rhinoceros.v5.SR12.5.12.50810.13095
Mintec.MineSight.3D.v7.0.3
MXGPs for ArcGIS v10.2 and v10.3
Moldex3D R13.0 SP1 x64
Mosek ApS Mosek v7.1 WinMacLnx
Midas.Civil.2006.v7.3.Win
NI Software Pack 08.2015 NI LabVIEW 2015
NI.LabVIEW.MathScript.RT.Module.v2015
NI.LabVIEW.Modulation.Toolkit.v2015
NI.LabVIEW.VI.Analyzer.Toolkit.v2015
NI.SignalExpress.v2015
NI.Sound.and.Vibration.Toolkit.v2015
NewTek.LightWave3D.v2015.2.Win32_64
NI LabWindows CVI 2015
HoneyWell Care v10.0
PACKAGE POWER Analysis Apache Sentinel v2015
Petrosys v17.5
Plexim Plecs Standalone 3.7.2 WinMacLnx
Power ProStructures V8i v08.11.11.616
Provisor TC200 PLC
Processing Modflow(PMWIN) v8.043
Proteus 8.3_SP1
QPS.Fledermaus.v7.4.4b.Win32_64
Siemens NX v10.0.2 (NX 10.0 MR2) Update Only Linux64
SIMULIA Isight v5.9.4 Win64 & Linux64
SIMULIA TOSCA Fluid v2.4.3 Linux64
SIMULIA TOSCA Structure v8.1.3 Win64&Linux64
Resolume Arena v4.2.1
Siemens Solid Edge ST8 MP01
TDM.Solutions.RhinoGOLD.v5.5.0.3
The.Foundry.NukeStudio.v9.0V7.Win64
Thinkbox Deadline v7.1.0.35 Win
ThirdWaveSystems AdvantEdge 6.2 Win64
Tecplot.360.EX.2015.R2.v15.2.1.62273.Win64
VERO SURFCAM 2015 R1
WAsP v10.2
Trimble.Inpho.SCOP++.5.6.x64         
Trimble.Inpho.TopDM.5.6.x64
Mentor.Graphics.FloEFD v15.0.3359.Suite.X64
Mentor Graphics FloTHERM Suite v11.1 Win32_64
Mentor.Graphics.FloTHERM.XT.2.3.Win64
Mentor_Graphics_HyperLynx v9.2 &Update1 Win32_64
Mentor.Graphics.FloVENT v11.1 Win32_64
Mentor.Graphics.FloMCAD Bridge 11.0 build 15.25.5
Mentor.Graphics.FloVIZ 11.1 Win32_64
Mentor.Graphics.FloTHERM PCB 8.0
Mentor.Graphics.Tanner.Tools.16.30.Win


          hadoop (3.1.1)      Cache   Translate Page   Web Page Cache   
The Apache™ Hadoop® project develops open-source software for reliable, scalable, distributed computing.

           Excel Work       Cache   Translate Page   Web Page Cache   
Basic Android application to send promotional emails automatically. Uses the Apache POI library to read Microsoft Excel files. Send email...
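
For reference, a minimal sketch of the reading side with Apache POI could look like the following. The file name recipients.xlsx and the assumption that email addresses sit in the first column are made up for illustration, and on Android a POI port for Android would typically be needed rather than plain desktop Java.

import java.io.File;
import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.usermodel.DataFormatter;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.ss.usermodel.WorkbookFactory;

public class RecipientReader {
    public static void main(String[] args) throws Exception {
        DataFormatter fmt = new DataFormatter();
        // "recipients.xlsx" is a placeholder; WorkbookFactory handles both .xls and .xlsx.
        try (Workbook workbook = WorkbookFactory.create(new File("recipients.xlsx"))) {
            Sheet sheet = workbook.getSheetAt(0);
            for (Row row : sheet) {
                Cell emailCell = row.getCell(0);   // assumed: first column holds the address
                if (emailCell == null) continue;
                String email = fmt.formatCellValue(emailCell);
                if (email.contains("@")) {
                    System.out.println("Would send promotional mail to: " + email);
                }
            }
        }
    }
}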
          How Do You Start In The Tech Sector?      Cache   Translate Page   Web Page Cache   

How Do You Start In The Tech Sector?
Career

August 9th, 2018

The tech sector, if you know what you're doing, is easier than most fields to get started in. However, you do have to know what you're doing. In this post, I'm going to step through a series of ways to get started, in case you're not sure.

Sounds easy, right? Well, nothing worthwhile's easy. Now, to be fair, I don't mean "if you know what you're doing" in any patronising or condescending way.

What I mean is that, unlike say being a GP, dentist, civil engineer, corporate lawyer, Queen's Counsel (QC), etc., you don't need to have years of formal training.

What’s more, you don’t need to be registered with an industry group/board before you're allowed to work. These can include the Institute of Chartered Accountants, the Queensland Law Society, or the Queensland Bar Association.

In IT, however, most people whom I've spoken to over the years care far more for what you can do, rather than what a piece of paper says you could do.

Let's Say You Want to Write Code

If you want to write code, then start by learning the basics of a software development language. I'm not going to get into a flame war about one language or another, whether one's better than another or not.

That's for people with too much time on their hands, and for people who are too emotionally invested in their language(s) of choice ― or dare I say, just a bit insecure.

There are a host of languages to choose from, readily available on the three major operating systems (Linux, macOS, and Windows). Some of the most common, where you'll find the greatest amount of help and documentation, are PHP, Perl, C/C++, Java, Go, Ruby, Python, Haskell, and Lisp. Grab yourself an editor, or an IDE, learn it inside out, and get started learning to write code.

I've linked to a host of excellent online resources for each at the end of the article.

For my part, I prefer any language borne out of C/C++. I've written code in Visual Basic and Cobol and didn't come away from either experience positively.

Once you've learned the basics, start contributing to an open source project! You don't need to be overly ambitious, so the project doesn't need to be a big one.

It could be a small library, such as VIM for Technical Writers that I maintain every so often. It could, however, be the Linux Kernel too, if that's your motivation and you are feeling particularly ambitious.

Regardless of what you choose, by contributing to these projects you'll learn far faster and better than you likely could in any other way. Why?

Because you're working on real projects and have the opportunity to be mentored by people who have years of hands-on experience. You'll get practical, guided experience, the kind you'd likely take years to acquire on your own.

They'll help teach you good habits, best practices, patterns, techniques, and so much more; things you'd likely take ages to hear about, let alone learn.

What's more, you'll become part of a living, breathing community where ― hopefully ― you're encouraged to grow and appreciate the responsibilities and requirements of what it takes to ship software.

But I'd Rather Be a Systems Administrator?

The same approach can be broadly applied. Here's my suggestion: install a copy of Linux, BSD, or Microsoft Windows on an old PC or laptop. As you're installing it, have a look around at the tools that are available for it.

Hint: open source provides a staggering amount of choice. #justsayin.

Get to know how it's administered, whether via GUI tools (and PowerShell) on Windows, or via the various daemons and their configuration files and command-line tools on Linux and BSD.

Server administration's a pretty broad topic, so it's hard ― if not downright impossible ― to suggest a specific set of tools to learn. I'm encouraging you at this point to get a broad understanding.

Later, if you're keen, you can specialise in a particular area. However, for now, get a broad understanding of:

- Networking
- User and Group Management
- Installation Options and Tooling
- Service/Daemon Configuration; and
- Disk Management.

Whether you're on Linux, BSD, or Windows, I've linked to a host of resources at the bottom of the article to help get you started.

Now that you've learned the fundamentals, do something where people can critique you and hold you accountable, such as hosting a website of your own through a provider such as Digital Ocean or Linode.

The web server you use, whether Apache , NGINX , Lighttpd , or IIS doesn't matter. Just use one that works well on your OS of choice.

Once you've got it up and running, start building on the day to day tasks required to keep it up and running nicely. Once you've grown some confidence, move on to learning how to improve the site's security and performance, and deployment process.

This can include:

- Optimising the web server, filesystem, and operating system configuration settings for maximum throughput
- Setting up an intrusion detection system (IDS); and
- Dockerising your site

To Go Open Source or Microsoft?

By now you've got a pretty good set of knowledge. However, stop for just a moment, because it's time to figure out if you're going to specialise in open source (Linux/UNIX/BSD) or whether you're going to focus around Microsoft's tools and technologies.

You can become knowledgeable in both, and most developers and systems administrators that I know do have a broad range of knowledge in both. However, I'd suggest that it's easier to build your knowledge in one rather than attempting to learn both.

Depending on the operating system you've been using up until now, it's likely that you've already made your choice. However, it's good to stop and deliberately think about it.

What Do You Do Next?

Now, let's get back to building your skills. What do you do next? If you want to be a sys admin, start looking around for opportunities to help others with their hosting needs.

Don't go all in just yet. There's no need to rush. Keep stepping up gradually, building your confidence and skills.

If you're not sure of who might need help, have a think about:

- What clubs are you involved in?
- Do you have friends with small businesses that might need support?
- Do you know others who want to learn what you have and need a mentor?

I'm sure that as you start thinking, you'll be able to uncover other ideas and possibilities. Now you have to get out of your comfort zone, contact people and ask them if they need help.

Worst case scenario, they say no. Whatever! Keep going until you find someone who does want help and is willing to take you on.

Regardless of the path that you take, you should feel pretty confident in your foundational skills, because they're based on practical experience.

So, it's time to push further. To do that, I'd suggest contacting a University, a bank, or an insurance provider, if you want to cut your teeth on big installations.

Sure, many other places have big server installations. However, these three are the first that come to mind.

If you are focused on software development, here are a few suggestions:

- Contact software development companies (avoid "digital agencies") and see if they’re hiring.
- Talk to your local chamber of commerce and industry and let them know you’re around and what you do.
- Find the local business networking groups and go to the networking breakfasts.
- Get involved in your local user groups (this goes for sys admins too, btw).
- Start a user group if there isn’t one for what you want to focus on.

In Conclusion

I could go on and on. The key takeaway I'm trying to leave you with is that, if you have practical experience, you'll increase the likelihood of gaining employment.

Any employer I've had of any worth values hands-on experience over a piece of paper any day.

Don't get me wrong; there's nothing wrong with degrees or industry certifications. And for complete transparency:

- I have a Bachelor of Information Technology
- I'm LPIC-1 certified; and
- I’m a Zend (PHP 5) Engineer

However, university qualifications and industry certifications should only reinforce what you already know, and not be something that is used to get your start.

With all that said, I want to encourage you to go down the Open Source path, not Microsoft. But I’m biased, as I’ve been using Linux since 1999.

Regardless, have a chew on all that, and let me know what you think in the comments. I hope that, if you’re keen to get into IT, that this helps you do so, and clears up one or more questions and doubts that you may have.

Further Reading

Open Source
          Hydromantis CapdetWorks v3.0      Cache   Translate Page   Web Page Cache   

          Database Administrator - David Aplin Group - Saskatoon, SK      Cache   Translate Page   Web Page Cache   
Understanding of Apache, Tomcat, Java, JBOSS, Spring, Hibernate, Struts, J2EE, Javascript/JQuery, HTML and .NET C# considered an asset....
From David Aplin Group - Thu, 02 Aug 2018 06:29:09 GMT - View all Saskatoon, SK jobs
          An "insider" explains! How the ASF, a miraculously large IT volunteer organization, is run      Cache   Translate Page   Web Page Cache   
 The Apache Software Foundation (ASF) is a non-profit organization that brings together well-known OSS projects such as the Apache HTTP Server and Tomcat. These days, with countless OSS projects published on GitHub and elsewhere, behind the handful of projects that have succeeded in winning broad user support there is solid organizational management sustaining them. An "insider" introduces the ASF, an organization that will soon celebrate its 20th anniversary. This article is a revised and expanded version of a post from the RONDHUIT blog (below).
          problem generating proxy class      Cache   Translate Page   Web Page Cache   
hi,

I am using Axis to generate Java proxy classes from WSDL as follows:

D:\Axis>java -cp .;d:\axis\lib\wsdl4j-1.5.1.jar;d:\axis\lib\saaj.jar;d:\axis\lib\jaxrpc.jar;d:\axis\lib\axis-ant.jar;d:\axis\lib\log4j-1.2.8.jar;d:\axis\lib\commons-discovery-0.2.jar;d:\axis\lib\commons-logging-1.0.4.jar;d:\axis\lib\axis.jar;d:\axis\lib\activation.jar;d:\axis\lib\mailapi.jar org.apache.axis.wsdl.WSDL2Java -N"urn:crmondemand/ws/ecbs/opportunity/10/2004"="crmondemand.ws.ecbs.opportunity.10.2004"
- Unable to find required classes (javax.activation.DataHandler and javax.mail.internet.MimeMultipart). Attachment support is disabled.
Exception in thread "main" java.lang.NoClassDefFoundError: org/xml/sax/helpers/DefaultHandler
at java.lang.ClassLoader.defineClass0(Native Method)
at java.lang.ClassLoader.defineClass(Unknown Source)
at java.security.SecureClassLoader.defineClass(Unknown Source)
no. of other such error
at org.apache.axis.wsdl.toJava.Emitter.<init>(Emitter.java:144)
at org.apache.axis.wsdl.WSDL2Java.createParser(WSDL2Java.java:209)
at org.apache.axis.wsdl.gen.WSDL2.<init>(WSDL2.java:96)
at org.apache.axis.wsdl.WSDL2Java.<init>(WSDL2Java.java:194)
at org.apache.axis.wsdl.WSDL2Java.main(WSDL2Java.java:371)

D:\Axis>-N"urn:/crmondemand/xml/Opportunity/Data"="crmondemand.xml.opportunity.D
ata"
The filename, directory name, or volume label syntax is incorrect.

D:\Axis>-N"urn:/crmondemand/xml/Opportunity/Query"=crmondemand.xml.opportunity.Q
uery Opportunity.wsdl
The filename, directory name, or volume label syntax is incorrect.

I have all the classes but am still getting the error... please help. Searching the net, I found that WSDL2Java requires the URL of the web service... I downloaded the WSDL file from the CRM OD admin....

          Java Developer - ALTA IT Services, LLC - Clarksburg, WV      Cache   Translate Page   Web Page Cache   
Experience with the following technologies – J2EE, Weblogic, Java, Javascript, JQuery, AngularJS, Apache, Linux, Subversion, and GitHub....
From ALTA IT Services, LLC - Tue, 12 Jun 2018 17:33:52 GMT - View all Clarksburg, WV jobs
          Adopt Apache a Black - with White Husky / Mixed dog in Manassas, VA (22817703) (Adopt-a-Pet.com)      Cache   Translate Page   Web Page Cache   
Adopt Apache a Black - with White Husky / Mixed dog in Manassas, VA (22817703) nice with cats, nice with dogs, great with children, housetrained, shots current

          Hadoop Developer with Java - Allyis Inc. - Seattle, WA      Cache   Translate Page   Web Page Cache   
Working knowledge of big data technologies such as Apache Flink, Nifi, Spark, Presto, Elastic Search, DynamoDB and other relational data stores....
From Dice - Sat, 28 Jul 2018 03:49:51 GMT - View all Seattle, WA jobs
          Sr Software Engineer - Hadoop / Spark Big Data - Uber - Seattle, WA      Cache   Translate Page   Web Page Cache   
Under the hood experience with open source big data analytics projects such as Apache Hadoop (HDFS and YARN), Spark, Hive, Parquet, Knox, Sentry, Presto is a...
From Uber - Sun, 13 May 2018 06:08:42 GMT - View all Seattle, WA jobs
          Software Development Engineer - Big Data Platform - Amazon.com - Seattle, WA      Cache   Translate Page   Web Page Cache   
Experience with Big Data technology like Apache Hadoop, NoSQL, Presto, etc. Amazon Web Services is seeking an outstanding Software Development Engineer to join...
From Amazon.com - Wed, 08 Aug 2018 19:26:05 GMT - View all Seattle, WA jobs
          Sr. Technical Account Manager - Amazon.com - Seattle, WA      Cache   Translate Page   Web Page Cache   
You can also run other popular distributed frameworks such as Apache Spark, Apache Flink, and Presto in Amazon EMR;...
From Amazon.com - Wed, 01 Aug 2018 01:21:56 GMT - View all Seattle, WA jobs
          Permian Power: Apache and Kayne Anderson Birth a $3.5-Billion Midstream Corporation      Cache   Translate Page   Web Page Cache   

New company Altus Midstream will provide midstream ops for Apache’s Alpine High Apache Corp (ticker: APA) and Kayne Anderson Acquisition Corp (ticker: KAAC) announced today they will combine midstream assets, forming a $3.5 billion pure-play Permian midstream player. The combined company, called Altus Midstream, will be publicly traded. Altus’ primary asset will be Apache’s Alpine High gathering and processing systems,[Read More...]

The post Permian Power: Apache and Kayne Anderson Birth a $3.5-Billion Midstream Corporation appeared first on Oil & Gas 360.


          Java Backend Developer - C4J - Herentals      Cache   Translate Page   Web Page Cache   
Are you the mastermind behind simple solutions to complex problems? Do they meet the requirements of both the customer and the user? Do you enjoy working in a large-scale, open source environment? Are you a guru for junior developers? Welcome! As a back-end developer you will work with Java, Spring, Hibernate, Jersey, Jackson, Apache CXF, ... to modernise old back-end services and implement new ones. For you, a box has no walls, only an opening to jump out of:...
          Home-Based Satellite TV Technician/Installer - DISH Network - Apache, OK      Cache   Translate Page   Web Page Cache   
Must possess a valid driver's license in the State you are seeking employment in, with a driving record that meets DISH's minimum safety standard.... $15 an hour
From DISH - Mon, 09 Jul 2018 19:17:49 GMT - View all Apache, OK jobs
          Personal Care Aide - May's Plus, Inc. - Apache, OK      Cache   Translate Page   Web Page Cache   
Has a telephone and dependable transportation, valid driver’s license and liability insurance. Provides assistance with non-technical activities of daily living...
From May's Plus, Inc. - Tue, 17 Apr 2018 14:05:28 GMT - View all Apache, OK jobs
          Womens Flight Pants have arrived      Cache   Translate Page   Web Page Cache   

It's here. The New Nena and Pasadena Womens Flight Pants.

New fits, new colours, new styles.

//

On trend and street-ready, the NXP Flight Pant is the perfect relax-fit piece to take you from a casual Sunday to Friday night drinks.

Available in Jet Black and Apache Green, the twill Flight Pant is the chino you've been waiting for, and the Arizona Blue denim are your new favourite jeans.

 

  The new Flight Skinny is the fresh Nena and Pasadena take on the classic skinny silhouette.

Featuring a mid rise, slimline flight pocket detailing on the back and thigh and tapered elastic ankles, the Flight Skinny comes in a super-stretch pale blue denim, and blue-black wax finish versions. 


          Women's Flight Pants      Cache   Translate Page   Web Page Cache   

You’ve been borrowing from your brother; stealing from your boyfriend and now you can get your very own. The Women’s Flight Pant is here.

Nena And Pasadena take on the world of women's wear in eight #ontrend colours, including premium denim washes and slick cotton twill.

With the tough and adventurous spirit of the originals, the Women’s Flight Pant is designed to fit a feminine shape, with combat-style pockets and twin gusset crotch panelling to create a drop-crotch look.

Pull high onto the waist to wear as a skinny leg or emphasise the drop-crotch by buying a size up. Dress them up or down with heels or sneakers.

Women’s Flight Pants: better than the rest.

The Women’s Flight Pants are available in Wax Navy, Wax Black, Super Bleach, Acid Black, Black, Grey, Navy and Apache Green.

Shop Women's Flight Pant Arizona Blue


          Living the #highlife      Cache   Translate Page   Web Page Cache   

Nena and Pasadena's Spring HIGH LIFE collection showcases the label's signature pant styles. Witness the evolution of the Flight Pant and welcome the Champion Pant.


NENA AND PASADENA Bandana T-Shirt White
NENA AND PASADENA Flight Pant Apache Green
NENA AND PASADENA The Body Pocket T-Shirt White
NENA AND PASADENA Assassin Scoop Back Pocket T-Shirt Optical White
NENA AND PASADENA Flight Pant Snow Wash
NENA AND PASADENA Inventory Scoop Back Pocket T-Shirt Space Indigo
NENA AND PASADENA Champion Pant Forest Green
NENA AND PASADENA Devil Island T-Shirt True Black
NENA AND PASADENA Flight Pant Deep Indigo
NENA AND PASADENA Hyper T-Shirt Optical White
NENA AND PASADENA Flight Pant Arizona Blue
NENA AND PASADENA Bandana Pocket Muscle Vintage Black
NENA AND PASADENA Oh Mother T-Shirt Sublimated

          Flight Pant Drops Again      Cache   Translate Page   Web Page Cache   

Nena and Pasadena has released the second edition of its signature Flight Pant. The new pants are hitting shop floors in three new colours for you to choose from: Apache Green, Combat Straw and Bordeaux. Which will you buy?




NENA AND PASADENA Flight Pant Apache Green
NENA AND PASADENA Flight Pant Combat Straw
NENA AND PASADENA Flight Pant Bordeaux

          Développeur Java/JEE - Voonyx - Lac-beauport, QC      Cache   Translate Page   Web Page Cache   
Java Enterprise Edition (JEE), Eclipse/IntelliJ/Netbeans, Spring, Apache Tomcat, JBoss, WebSphere, Camel, SOAP, REST, JMS, JPA, Hibernate, JDBC, OSGI, Servlet,...
From Voonyx - Thu, 26 Jul 2018 05:13:45 GMT - View all Lac-beauport, QC jobs
          Java/JEE Developer - Voonyx - Lac-beauport, QC      Cache   Translate Page   Web Page Cache   
Java Enterprise Edition (JEE), Eclipse/IntelliJ/Netbeans, Spring, Apache Tomcat, JBoss, WebSphere, Camel, SOAP, REST, JMS, JPA, Hibernate, JDBC, OSGI, Servlet,...
From Voonyx - Thu, 26 Jul 2018 05:13:41 GMT - View all Lac-beauport, QC jobs
          Conseiller en architecture technologique web - iA Groupe financier - Québec City, QC      Cache   Translate Page   Web Page Cache   
Expert in technologies such as Microsoft IIS, API Gateway, IBM WebSphere, Netscaler, Apache, IBM MQ Series, web services, or any other relevant technologies...
From iA Financial Group / iA Groupe financier - Fri, 08 Jun 2018 06:16:49 GMT - View all Québec City, QC jobs
          Amazon Redshift announces support for nested data with Redshift Spectrum      Cache   Translate Page   Web Page Cache   

You can now use Amazon Redshift to directly query nested data in Apache Parquet, Apache ORC, JSON and Amazon Ion file formats stored in external tables in Amazon S3. Redshift Spectrum, a feature of Amazon Redshift, enables you to use your existing Business Intelligence tools and intuitive and powerful SQL extensions to analyze both scalar and nested data stored in your Amazon S3 data lake.


           365planet365.com was reported accessible in China       Cache   Translate Page   Web Page Cache   
URL: 365planet365.com
Title: Apache2 Debian Default Page: It works
Report Date: Aug 10, 2018 2:17:18 AM
Reporter Country: China
Reporter ISP:
Comments: Accessible in China according to https://en.greatfire.org/365planet365.com

          Comment on New Military Caliber Sparks Much (maybe too much) Debate by Rocketman      Cache   Translate Page   Web Page Cache   
Check out the .30 Apache. Better round with more stopping power at close range.
          Motorcycles 2018: see 10 launches expected by the end of the year      Cache   Translate Page   Web Page Cache   

Kawasaki Ninja 400 and Dafra Apache 200 RTR are among the most anticipated models. In the larger displacements, BMW is betting on the F 750 GS and F 850 GS. (Photos: Rafael Miotto/Marcelo Brandt/Fabio Tito/G1/Divulgação.)

After the 2017 Salão Duas Rodas show, many motorcycle launches were scheduled for 2018. Some of them reached dealerships in the first half of the year, but eagerly awaited newcomers, such as the Kawasaki Ninja 400 and the Dafra Apache 200 RTR, are still to come and will arrive by the end of the year. The segments receiving new motorcycles are varied: the options range from adventure bikes, such as the Royal Enfield Himalayan and the BMW F 750 GS and F 850 GS, to a high-luxury tourer, the Honda Gold Wing. See the list of anticipated bikes:

BMW F 750 GS - BMW has already confirmed it will update its mid/large-displacement GS line in Brazil. Following its natural update cycle, the F 700 GS should be replaced by the F 750 GS, which was presented at the last Milan show. Besides a new look, the bike's engine has evolved.

F 850 GS - Like its sibling the F 750 GS, the F 850 GS will be the natural replacement for the F 800 GS, but the manufacturer has not yet set a date. Like the F 750 GS, the F 850 arrives in the last quarter of the year and will be assembled in Manaus.

Dafra Apache 200 RTR - One of the main highlights of the 2017 Salão Duas Rodas, the Apache 200 RTR will be launched in Brazil in the second half of 2018. Made in India, it will replace the current Apache 150. Dafra is planning important additions to its urban motorcycle line in 2018, and the main one will be the Apache 200 RTR.

Ducati Supersport - To make its sportbike line more accessible, Ducati created the Supersport. The name was revived from the past and the two-cylinder engine comes from the Hypermotard line: 937 cc and 113 horsepower. The first units reach dealerships in August.

Honda CB 1000 R - There is no confirmation from Honda yet, but the CB 1000 R could be an option for the Brazilian market. Since the previous CB 1000 R stopped being sold here, the manufacturer has been absent from the large-displacement naked segment. Recently, the model had its patent registered in the country.

Gold Wing - Going through its biggest change since 2001, the new generation of the Honda Gold Wing is far more modern and goes on sale in Brazil this second half. The model features riding modes, an electronic throttle, traction control, hill-start assist, a start-stop system and an airbag.

Kawasaki Ninja 400 - The Kawasaki Ninja 400 arrives in Brazil in 2018 to replace the Ninja 300. The model is more powerful, with 45 horsepower, and has a new look.

Z900 RS - Using the naked Z900 as a base, Kawasaki created a model with a retro, classic look. The engine was also retuned to favour torque at low revs. Priced at R$ 48,990, the bike reached dealerships in July.

Royal Enfield Himalayan - After starting its operation in Brazil in 2017, Royal Enfield is expected to expand its line-up in the country. After the classic Bullet and Classic, the next step will be to broaden its reach into the trail segment with the Himalayan. The expectation is that the bike arrives in the second half of the year.

Interceptor 650 - Nothing has been officially confirmed yet, but Royal Enfield may bring its new two-cylinder motorcycles to Brazil. One of them is the Continental GT 650, a model inspired by café racers and a more muscular version of the Continental GT 500. Besides the GT 650, Royal Enfield may also bet on the Interceptor 650, which carries the same engine as its "sister" but with even more classic lines.
          Reddit: How to block all but LAN traffic on Apache      Cache   Translate Page   Web Page Cache   
submitted by /u/rsossl
[link] [comments]
          Apache POI online word and excel editor      Cache   Translate Page   Web Page Cache   
I would like to set up a report generator which needs to use Apache POI, and the following are needed: 1) Excel files can be uploaded to an online server and opened and edited (Java opens and edits the spreadsheet); 2) Excel data can be shown in an HTML document, e.g... (Budget: $2000 - $6000 HKD, Jobs: Java, Linux, MySQL, PHP, Software Architecture)
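
As a rough illustration of the kind of server-side code such a generator could use (not a specification of this particular project), the following Java sketch opens an uploaded workbook with Apache POI, edits a cell, renders the first sheet as a crude HTML table and saves the result. All file names are placeholders.

import java.io.FileInputStream;
import java.io.FileOutputStream;
import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.usermodel.DataFormatter;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.ss.usermodel.WorkbookFactory;

public class ReportEditor {
    public static void main(String[] args) throws Exception {
        DataFormatter fmt = new DataFormatter();
        // "uploaded.xlsx" / "edited.xlsx" stand in for wherever the upload handler stores files.
        try (FileInputStream in = new FileInputStream("uploaded.xlsx");
             Workbook workbook = WorkbookFactory.create(in)) {

            // Edit: stamp a value into the first cell of the first sheet.
            Sheet sheet = workbook.getSheetAt(0);
            Row first = sheet.getRow(0) != null ? sheet.getRow(0) : sheet.createRow(0);
            first.createCell(0).setCellValue("Report generated");

            // Render the sheet as a very plain HTML table for display in the browser.
            StringBuilder html = new StringBuilder("<table>");
            for (Row row : sheet) {
                html.append("<tr>");
                for (Cell cell : row) {
                    html.append("<td>").append(fmt.formatCellValue(cell)).append("</td>");
                }
                html.append("</tr>");
            }
            html.append("</table>");
            System.out.println(html);

            // Save the edited workbook back to disk.
            try (FileOutputStream out = new FileOutputStream("edited.xlsx")) {
                workbook.write(out);
            }
        }
    }
}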
          Commerce OTP Hungary: Webshop lib and php 7.x      Cache   Translate Page   Web Page Cache   

The required webshop lib seems to be incompatible with PHP 7.x: it still calls ereg_replace(), which was removed in PHP 7.

https://www.otpbank.hu/static/portal/sw/file/Webshop_5.0.zip

I get this error message:
Error: Call to undefined function ereg_replace() in LoggerDatePatternConverter->convert() (.../sites/all/libraries/otpwebshop/lib/apache/log4php/helpers/LoggerPatternConverter.php, line 289).


          Search API Solr Search: Add result grouping support      Cache   Translate Page   Web Page Cache   

When using fields in grouping:

$grouping = array(
      "use_grouping"=>true,
      "group_sort" => ["field_change_date" => QueryInterface::SORT_DESC],
      "fields"=>array("nid")
);
$solrQuery->setOption("search_api_grouping",$grouping);

I get the following error (because of the grouping):

PHP message: Error: Cannot use object of type Drupal\search_api\Item\Field as array in /var/www/drupal/public_html/web/modules/contrib/search_api_solr/src/Plugin/search_api/backend/SearchApiSolrBackend.php on line 2462

And another because of the sort:

o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: Can't determine a Sort Order (asc or desc) in sort spec 'Content » Änderungsdatum [field_change_date]

The result logic was completely commented out, so there were no results. I commented it back in and fixed it; now the grouping should work again.

I included a patch to fix this


          Delivering Real Time Analytics with Sitecore and DataStax      Cache   Translate Page   Web Page Cache   

In one of my earlier articles, I talked about why your company or organization should adopt Sitecore as your Experience Platform. It's a good platform for users, content authors, and developers to create compelling and engaging digital experiences as well as collect information on website traffic. Machine learning and analytics in personalized content are two of the most compelling features of Sitecore. In today's world, companies, particularly the Fortune 500, require real-time analytics to help drive stakeholder goals.

It’s Tradition

Traditionally Sitecore used MongoDB as their experience database (xDB) of choice for storing and retrieving analytics. However, with the latest version of Sitecore, the company is moving to more options for development teams to use to fit their needs especially if they require real-time analytics. There are now options for using SQL Server’s new provider for NoSQL data. In fact, at the time of the writing, the only option for Sitecore 9 xDB deployment is the SQL Server Provider. The company has planned support for MongoDB but sent a clear message with their change of xDB choice in the latest version. The platform is also looking at expanding to higher end distributed NoSQL databases such as Microsoft Azure CosmosDB. This would require an Azure subscription but would offer features to support distributed analytics.

Why DataStax?

DataStax Enterprise (DSE) is an always-on, distributed cloud database built on Apache Cassandra and designed for the hybrid cloud. Our firm is making the argument that Apache Cassandra, and more importantly DataStax, should be used as your Analytics xDB option if you are building experiences for the Right-Now Economy. These are usually systems which use IoT (internet of things) or have global demand from a user audience of hundreds of millions and thus can never fail. That goes double for the analytical operations you run on the real-time data you are storing.

DataStax over the Competition

Cassandra and DataStax clearly outperform MongoDB and other rivals in Throughput by Workload and Load Process benchmarks. They also provide no single point of failure and more consistency models to support high-level operations. Cassandra is completely free and open source and supports both cloud and on-premises deployment (translation: you won't need an Azure subscription, unlike CosmosDB), but the real special sauce is DataStax. DataStax is a commercial product; however, it is almost always used when Cassandra is deployed at enterprise scale. DSE integrates Cassandra with graph, search, analytics, administration, developer tooling, and monitoring all in one platform. With Mongo or other NoSQL competitors, developers would have to piece together these functionalities with third-party options instead of native out-of-the-box support. Developers can also create Spark jobs and see analytical data or personalize content in real time no matter how many users are viewing the experience. Other systems support Spark; however, they are usually deployed in a master-to-slave or parent-to-child relationship, providing points of failure for both your users and your analytical operations. Furthermore, they tend to face challenges when an application needs to be global.

Our Business Platform Services

Need help with a Business Platform implementation or guidance in creating a tailor-fit design & architecture? Our team has decades of Business Platform experience and can help you transition onto the next phase of your technology eco-system, whether it be using Sitecore and DataStax, or simply a combination of common SaaS software like WordPress and Salesforce. Don’t know where to start? Check out our services or send us a quick email!

Resources DataStax Corporate DSE vs MongoDB

Photo by Carlos Muza on Unsplash


          More Cloud Firestore Improvements!      Cache   Translate Page   Web Page Cache   

Source: More Cloud Firestore Improvements! from Firebase



Todd Kerpelman

Developer Advocate

Well, I think it’s safe to say there’s been more news coming out of Cloud Next 2018 than you can shake a stick at. And while your manager is probably asking you to stop shaking sticks at news stories and get back to work, you might still be wondering, “How does all of this affect me, your typical app developer using Cloud Firestore?”

Good question! It turns out we’ve made some really nice improvements in Cloud Firestore that are either out already, or coming your way soon. Let’s go over them together, shall we?

Single field index controls

One new feature we’ve announced is the ability to disable the automatic indexing of a field in your documents. Why would you ever want to do this? Primarily, the issue is that Cloud Firestore will index the value of any field in a document, but if those fields happen to contain maps or arrays, it will then go ahead and recursively index every single value within those objects, too.

In many cases ― like storing a mailing address as a map ― this is exactly what you want, so you can search for documents by city or zip code. But if you’re storing a bunch of, say, raw drawing data into a massive array, you definitely don’t need those fields indexed. That’s going to cost you unnecessary time and storage costs, and runs the risk of bumping up against your “20k indexed fields per document” limit.



So in these situations, we’ve given you the ability to disable the automatic indexing of those fields. This will stop Cloud Firestore from indexing the value of that field, along with recursive indexing any arrays or maps it might find inside that field. So you can continue to index your “ address ” field above, while leaving your “ big_drawing ” field unindexed.

Single field index controls also help with that whole “You can’t do more than 500 writes per second in a collection where documents have constantly increasing or decreasing values” limit , which is something you could run into if you were, for example, trying to store timestamps in a very large collection. By disabling indexing on that timestamp field, you won’t have to worry about this limit, although it does mean you will no longer be able to query that collection by that timestamp.
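
As an illustration of the document shape being discussed, here is a minimal sketch using the Firestore Java client that writes a document containing both a small address map and a large array field. The collection and field names simply mirror the article's example, the credentials setup is assumed, and the index exemption itself is still configured in the console or CLI rather than in application code.

import com.google.cloud.firestore.Firestore;
import com.google.cloud.firestore.FirestoreOptions;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class NestedFieldsExample {
    public static void main(String[] args) throws Exception {
        // Uses Application Default Credentials; project configuration is assumed.
        Firestore db = FirestoreOptions.getDefaultInstance().getService();

        Map<String, Object> address = new HashMap<>();
        address.put("city", "Mountain View");
        address.put("zip", "94043");

        Map<String, Object> doc = new HashMap<>();
        doc.put("address", address);                       // worth indexing: query by city or zip
        doc.put("big_drawing", Arrays.asList(12, 7, 244)); // candidate for a single-field exemption

        // Blocks until the write completes.
        db.collection("users").document("alice").set(doc).get();
        db.close();
    }
}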

New Locations for Cloud Firestore

Many customers have been asking us for the ability to keep their Cloud Firestore data in specific locations around the world. This can be a significant performance boost if you think the majority of your customers are going to be located in a certain part of the world, and it may help you comply with local regulations around how your data is stored.

So we’re pleased to announce that we’re starting the process of adding new locations to host your Cloud Firestore data. We’re going to start with Frankfurt, Germany and South Carolina (in the U.S.), but you can expect to see more locations being added over the next several months.

You should see the “Cloud Firestore location” option when you create a new project in the Firebase console ― this will determine where your Cloud Firestore data is located. Please note that once you’ve selected a location, it can’t be changed later, so choose carefully!


Import / Export your data

We’ve also added the ability for you to import and export your Cloud Firestore data. This is useful if you ever want to make backups of your data, it gives you the freedom to migrate your data to another database if you ever wanted to, and it makes it easy for you to copy data between projects. That last feature can come in really handy if you want to migrate data from your production project into your “test”, “dev” or “staging” project.

Exports from Cloud Firestore will be stored into your Google Cloud Storage bucket, and from there you can use any number of tools to move your data into other systems. For more details, make sure to check out our documentation.
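As a rough illustration, the export and import are driven from the gcloud CLI. The bucket name and export folder below are placeholders, and depending on your gcloud version the commands may still require the beta component (gcloud beta firestore ...):

# export all documents to a Cloud Storage bucket
gcloud firestore export gs://my-project-backups

# import a previously created export back into Cloud Firestore
gcloud firestore import gs://my-project-backups/2018-08-08T12:00:00_12345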

Faster Security Rules deployment

We’re also happy to report that we’ll be speeding up the time in which security rules are deployed and made active in your project. Thanks to some serious improvements our engineers have made under the hood, the time it takes for security rules to become active in your project has moved from “a couple of minutes after you deploy them” to “a few seconds”. This should make testing security rules in your app a much better experience than before. Look for this feature to roll out sometime over the next week.

Higher Beta Limits

As we’ve made improvements on the backend, we’ve been able to increase some of the limits that were placed on Cloud Firestore while the product is in beta. You can now perform up to 10,000 writes per second per database (up from 2,500), and Cloud Firestore supports up to 1 million concurrent connections (up from 100,000).

Two Modes for Google Cloud Platform developers

Cloud Firestore now runs in two modes ― Native mode and Datastore mode. Native mode is probably what you’ve been using all this time as a Firebase developer, and it’s now available to Google Cloud Customers who want to add Cloud Firestore to their GCP projects.

Datastore mode is a 100% backwards-compatible version for developers who have been using Cloud Datastore up until now. It doesn’t contain all of the features of Cloud Firestore (like real-time updates or offline support), but it does add some nice improvements like strong consistency and removes some limits around writes and transactions.

All current Cloud Datastore customers will be seamlessly upgraded to Cloud Firestore in Datastore mode when Cloud Firestore reaches General Availability. For more information, be sure to see this post on the Google Cloud Platform blog.

So there ya go, folks. Lots of fun new features to play with in Cloud Firestore. Give ’em a try and let us know what you think! As always, if you have questions, you can join the Cloud Firestore Google group, or use the google-cloud-firestore tag on Stack Overflow.

Except as otherwise noted, the content of this article is licensed under the Creative Commons Attribution 3.0 License, and code samples are licensed under the Apache 2.0 License. For more details, see our Terms of Service.


Cloudera CCA 175 Spark Developer Certification: Hadoop Based

Cloudera CCA 175 Spark Developer Certification: Hadoop Based
Description

Featured on: Aug 2, 2018

Get hands-on experience and learn how to become a Spark application developer. Become a master at working with Spark DataFrames, HiveQL, and Spark SQL. Understand how to control importing and exporting of data in Spark through Apache Sqoop in the exact format that is needed. Learn all the Spark RDD transformations and actions needed to analyze big data. Become absolutely ready for the Cloudera Spark CCA 175 certification exam. This course is designed to cover the end-to-end implementation of the major components of Spark. I will be giving you hands-on experience and insight into how big data processing works and how it is applied in the real world. We will explore Spark RDDs, which are the most dynamic way of working with your data. They allow you to write powerful code in a matter of minutes and accomplish whatever tasks might be required of you. They, like DataFrames, leverage Spark's lazy evaluation and directed acyclic graphs (DAGs) to give you 100x better performance than MapReduce while writing less than a tenth of the code. You can execute all the joins, aggregations, transformations, and even machine learning you want on top of Spark RDDs. We will explore these in depth in the course, and I will equip you with all the tools necessary to do anything you want with your data.
Free HBase Quiz Questions & Answers 2018 Part 1
1. Top HBase Quiz Questions

Now, you have a good understanding of HBase. So, it's time to try your hands at the free online HBase quiz questions and test yourself. This quiz provides HBase MCQ questions with their answers and explanations. These best HBase quiz questions will build up your confidence while appearing for an HBase interview. Answer all these HBase quiz questions; it will help you to increase your knowledge. This online HBase test will help both freshers and experienced professionals.

So, let's explore the HBase quiz questions.



Que 1. There are 2 programs which confirm a write into HBase. One is the write-ahead log (WAL) and the other one is

Mem confirm log

Write complete log

log store

Memstore

The write-ahead log and the MemStore confirm the writing of an HBase value.

Que 2. A record deleted in HBase is not removed from HBase immediately. Instead it is written to another file and marked as Delete. Such a file is known as

DFile

Tombstone

Tombfile

Earmark

The deleted records are stored in a file known as a Tombstone.

Que 3. The value stored inside a cell, which is identified using a row key, column family, and column qualifier, is stored as

Byte

varchar

Nchar

number

The data stored inside a cell is always in byte format (a byte array).

Que 4. All MapReduce jobs reading from an HBase table accept their [K1,V1] pair in the form of

[rowid:cell value]

[rowkey:scan result]

[column family:cell value]

[column attribute:scan result]

The key and value in a MapReduce job reading from an HBase table correspond to the [rowkey:scan result] values.

Que 5. HBase stores data in

A single filesystem available to all RegionServers

As many filesystems as the number of regionServers

One filesystem per column family

One filesystem per table.

HBase stores its data on a single file system. It assumes all the RegionServers have access to that file system across the entire cluster.

Que 6. The property which enables a fully distributed mode for Hbase is

hbase-cluster.distributed-all

hbase-cluster.distributed-enable

hbase-cluster.fully-distributed

hbase-cluster.distributedy



By default HBase runs in standalone mode. For a fully distributed configuration set the hbase-cluster.distributed property to true.

Que 7. The length of the name of the column family should be

As small as possible

Preferably one character

As large as possible

Does not matter

The column family name should ideally be one character so that the metadata associated with a cell is minimal.

Que 8. A coprocessor is executed when an event occurs. This type of coprocessor is known as

Observer

Listener

Master

Event handler

The observer type of coprocessor is executed when an event occurs.

Que 9. The metadata of a region is accessed using the file named

.metainfo

.metaregion

.regioninfo

.regionmetainfo


The file .regioninfo stores the metadata information.

Que 10. In HBase there are two situations when the WAL log files need to be replayed. One is when the server fails. The other is when

The logs are full

Rows are deleted.

The cluster fails

Rows are updated

The only two instances when the logs are replayed are when the cluster starts or the server fails.

Que 11. HBase is a distributed ________ database built on top of the Hadoop file system.

Column-oriented

Row-oriented

Tuple-oriented

None of the mentioned

HBase is a data model that is similar to Google's Bigtable, designed to provide quick random access to huge amounts of structured data.

Que 12. Point out the correct statement :

HDFS provides low latency access to single rows from billions of records (Random access)

HBase sits on top of the Hadoop File System and provides read and write access

HBase is a distributed file system suitable for storing large files

None of the mentioned

One can store the data in HDFS either directly or through HBase. Data consumer reads/accesses the data in HDFS randomly using HBase.

Que 13. HBase is ________, defines only column families.

Row Oriented

Schema-less

Fixed Schema

All of the mentioned

HBase doesn't have the concept of a fixed column schema; it defines only column families.

Que 14. Apache HBase is a non-relational database modeled after Google’s _________

BigTop

Bigtable

Scanner

FoundationDB



Bigtable acts upon Google File System; likewise, Apache HBase works on top of Hadoop and HDFS.

Que 15. Point out the wrong statement :

HBase provides only sequential access of data

HBase provides high latency batch processing

HBase internally provides serialized access

All of the mentioned

HBase internally uses Hash tables and provides random access.

Que 16. The _________ Server assigns regions to the region servers and takes the help of Apache ZooKeeper for this task.

Region

Master

Zookeeper

all of the above

Master Server maintains the state of the cluster by negotiating the load balancing.

Que 17. Which of the following commands provides information about the user?

status

version

whoami

user

status command provides the status of HBase, for example,
First Class GPUs support in Apache Hadoop 3.1, YARN & HDP 3.0

This blog is also co-authored by Zian Chen and Sunil Govindan from Hortonworks.

Introduction: Apache Hadoop 3.1, YARN, & HDP 3.0
Without the speed-up from GPUs, some computations take forever! (Image from the movie “Howl’s Moving Castle”)

GPUs are increasingly becoming a key tool for many big data applications. Deep learning / machine learning, data analytics, genome sequencing, etc. all have applications that rely on GPUs for tractable performance. In many cases, GPUs can get up to 10x speedups. And in some reported cases (like this), GPUs can get up to 300x speedups! Many modern deep-learning applications directly build on top of GPU libraries like cuDNN (CUDA Deep Neural Network library). It’s not a stretch to say that many applications like deep learning cannot live without GPU support.

Starting with Apache Hadoop 3.1 and HDP 3.0, we have first-class support for operators and admins to configure YARN clusters to schedule and use GPU resources.

Previously, without first-class GPU support, YARN had a not-so-comprehensive story around GPU support. Without this new feature, users had to use node labels (YARN-796) to partition clusters to make use of GPUs, which simply puts machines equipped with GPUs into a different partition and requires jobs that need GPUs to be submitted to that specific partition. For a detailed example of this pattern of GPU usage, see Yahoo!’s blog post about large-scale distributed deep learning on Hadoop clusters.

Without native and more comprehensive GPU support, there’s no isolation of GPU resources either! For example, multiple tasks may compete for a GPU resource simultaneously, which could cause task failures, GPU memory exhaustion, etc.

To this end, the YARN community looked for a comprehensive solution to natively support GPU resources on YARN.

First class GPU support on YARN
GPU scheduling using “extensible resource-types” in YARN

We need to recognize GPU as a resource type when doing scheduling. YARN-3926 extends the YARN resource model to a more flexible model which makes it easier to add new countable resource-types. It also considers the related aspect of “resource profiles” which allow users to easily specify the resources they need for containers. Once we have GPUs type added to YARN, YARN can schedule applications on GPU machines. By specifying the number of requested GPU to containers, YARN can find machines with available GPUs to satisfy container requests.


GPU isolation

With GPU scheduling support, containers with GPU request can be placed to machines with enough available GPU resources. We still need to solve the isolation problem: When multiple applications use GPU resources on the same machine, they should not affect each other.

Even if a GPU has many cores, there’s no easy isolation story for processes sharing the same GPU. For instance, Nvidia Multi-Process Service (MPS) provides isolation for multiple processes accessing the same GPU; however, it only works for the Volta architecture, and MPS is not widely supported by deep learning platforms yet. So our isolation, for now, is per GPU device: each container can ask for an integer number of GPU devices along with memory and vcores (for example 4G memory, 4 vcores and 2 GPUs). With this, each application uses its assigned GPUs exclusively.

We use cgroups to enforce the isolation. This works by putting a YARN container’s process tree into a cgroup that allows access to only the prescribed GPU devices. When Docker containers are used on YARN, nvidia-docker-plugin, an optional plugin that admins have to configure, is used to enforce GPU resource isolation.

GPU discovery

For properly doing scheduling and isolation, we need to know how many GPU devices are available in the system. Admins can configure this manually on a YARN cluster, but it may also be desirable to discover GPU resources through the framework automatically. Currently, we’re using the Nvidia system management interface (nvidia-smi) to get the number of GPUs in each machine and the usage of these GPU devices. An example output of nvidia-smi looks like below:


[Screenshot: example nvidia-smi output]
Web UI

We also added GPU information to the new YARN web UI. On ResourceManager page, we show total used and available GPU resources across the cluster along with other resources like memory / cpu.



On NodeManager page, YARN shows per-GPU device usage and metrics:


Configurations

To enable GPU support in YARN, administrators need to set configs for GPU Scheduling and GPU isolation.

GPU Scheduling

(1) yarn.resource-types in resource-types.xml

This gives YARN a list of available resource types for users to use. We need to add “yarn.io/gpu” here if we want to support GPU as a resource type.

(2) yarn.scheduler.capacity.resource-calculator in capacity-scheduler.xml

The DominantResourceCalculator MUST be configured to enable GPU scheduling. It has to be set to org.apache.hadoop.yarn.util.resource.DominantResourceCalculator.

GPU Isolation

(1) yarn.nodemanager.resource-plugins in yarn-site.xml

This enables the GPU isolation module on the NodeManager side. By default, YARN will automatically detect and configure GPUs when the above config is set. The value should also include “yarn.io/gpu”.

(2) yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices in yarn-site.xml

Specify the GPU devices which can be managed by the YARN NodeManager, split by comma. The number of GPU devices will be reported to the RM to make scheduling decisions. Set to auto (default) to let YARN automatically discover GPU resources from the system.

Manually specify GPU devices if auto detect GPU device failed or admin only wants a s
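Pulling the four settings above together, a minimal configuration sketch could look like the following. The property names are the ones listed in this section; the split across resource-types.xml, capacity-scheduler.xml, and yarn-site.xml follows standard Hadoop conventions, and the values shown are illustrative:

<!-- resource-types.xml: declare GPU as a countable resource type -->
<configuration>
  <property>
    <name>yarn.resource-types</name>
    <value>yarn.io/gpu</value>
  </property>
</configuration>

<!-- capacity-scheduler.xml: GPU scheduling requires the DominantResourceCalculator -->
<configuration>
  <property>
    <name>yarn.scheduler.capacity.resource-calculator</name>
    <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
  </property>
</configuration>

<!-- yarn-site.xml: enable the GPU plugin on the NodeManager and let YARN discover devices -->
<configuration>
  <property>
    <name>yarn.nodemanager.resource-plugins</name>
    <value>yarn.io/gpu</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices</name>
    <value>auto</value>
  </property>
</configuration>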
Fast Track to Optimize Your Enterprise Data Warehouse

Enterprise Data Warehouse (EDW) is traditionally used for generating reports and answering pre-defined queries, where workloads and requirements for service level are static. The drawback is that the platforms impose rigidity, because the schemas must be modeled in advance for queries that are anticipated. Constrained by this limitation, users cannot freely explore and ask questions from their data to enable timely responses and insights that drive the speed of business required to stay competitive today.

“Warehouse by the Lake” Complementary Approach

Supplementing the EDW with Apache Hadoop not only contains the growing costs of running enterprise data warehouses, but also gives users flexibility and reusability over the consumption of data with the introduction of schema-on-read. When Hadoop is used to optimize the EDW, organizations can get the best of both worlds, with the EDW used for standard operational queries and Hadoop for exploratory analytics and workload shift.

Hadoop provides a versatile and extensible analytic platform that uses commodity hardware and open source innovation to deliver economies of scale. Enterprise Data Warehouse (EDW) optimization, where data- and compute-intensive processes are offloaded from the EDW to Hadoop, has proven to be one of the most popular use cases for the open source platform. EDW optimization is often one of the first use cases for Hadoop because it can readily deliver tangible results, thanks to:

Cost savings delivered by commodity infrastructure and open source software.
Proven capability to perform at scale.
Innovations that have brought interactive BI to Hadoop.
Productivity gains attributable to more efficient data enrichment and correlation.

However, the flip side is that making the proper configurations can be time-consuming because of the lack of expertise in integrating Hadoop into existing environments.

Introducing a Prescriptive Solution

The Hortonworks solution for EDW optimization addresses the need for an ideal configuration while capitalizing on Hadoop’s versatility. The solution enables customers unfamiliar with Hadoop to gain immediate proof of value with EDW optimization through a guided, fixed term, fixed-scope engagement, for the delivery of a full Hadoop platform that will grow with customer needs.

Beyond that, the one-month jumpstart engagement, which bundles services, software, and integrations, offers a prescriptive best-of-breed solution that includes:

Hortonworks Data Platform (HDP) as the open source Apache Hadoop distribution,
Syncsort DMX-h for data integration,
Jethro Data as the high-performance analytic engine, and
Hortonworks Professional Services as the center of excellence ensuring the implementation is on-time and on-target.

The solution guides customers through a “recipe” that generates production-ready online analytical processing (OLAP) cubes to which they can connect their designated BI tools. This encompasses rehosting data and ETL processes from the data warehousing environment onto the Hortonworks Data Platform (HDP), helping customers configure Hadoop and installing partner tooling for data integration and OLAP.

To learn more about the Hortonworks solution that can help you right-size your EDW in a time-efficient manner for accelerated time-to-value, please read the white paper below:

Using Hadoop to Optimize the Enterprise Data Warehouse


Software Engineer - Big Data - Charles Schwab - Westlake, TX
1+ years of experience with big data technologies – Apache Storm, MapR, HBase, Hadoop, Hive. Westlake - TX, TX2050R, 2050 Roanoke Road, 76262-9616....
From Charles Schwab - Sat, 04 Aug 2018 10:53:48 GMT - View all Westlake, TX jobs
El Freaky Ft. Afro Bros, Feid, Apache, Toby Letra, Stanley Jackson – NMF
Artist: El Freaky ft. Afro Bros, Feid, Apache, Toby Letra, Stanley Jackson. Track: NMF
Find Geolocation with Seeker with High Accuracy – Kali Linux 2018

With the help of Seeker, which is an open source Python script, you can easily find the geolocation of any device with high accuracy, along with device information like resolution, OS name, browser, public IP, platform, etc. Seeker uses Ngrok (for tunnelling) and creates a fake Apache web server (on SSL) which asks for location […]

The post Find Geolocation with Seeker with High Accuracy – Kali Linux 2018 appeared first on Yeah Hub.


Développeur Java/JEE (Java/JEE Developer) - Voonyx - Lac-beauport, QC
Java Enterprise Edition (JEE), Eclipse/IntelliJ/Netbeans, Spring, Apache Tomcat, JBoss, WebSphere, Camel, SOAP, REST, JMS, JPA, Hibernate, JDBC, OSGI, Servlet,...
From Voonyx - Thu, 26 Jul 2018 05:13:45 GMT - View all Lac-beauport, QC jobs
Java/JEE Developer - Voonyx - Lac-beauport, QC
Java Enterprise Edition (JEE), Eclipse/IntelliJ/Netbeans, Spring, Apache Tomcat, JBoss, WebSphere, Camel, SOAP, REST, JMS, JPA, Hibernate, JDBC, OSGI, Servlet,...
From Voonyx - Thu, 26 Jul 2018 05:13:41 GMT - View all Lac-beauport, QC jobs
Conseiller en architecture technologique web (Web Technology Architecture Consultant) - iA Groupe financier - Québec City, QC
Expert in technologies such as Microsoft IIS, API Gateway, IBM WebSphere, Netscaler, Apache, IBM MQ Series, WebService, or any other relevant technologies...
From iA Financial Group / iA Groupe financier - Fri, 08 Jun 2018 06:16:49 GMT - View all Québec City, QC jobs
A Look Inside Caddy, a Web Server Written in Go
Caddy is a unique web server with a modern feature set. Think nginx or Apache, but written in Go. With Caddy, you can serve your websites over HTTP/2. It can act as a reverse proxy and load balancer. Front your PHP apps with it. You can even deploy your site with git push. Cool, right? Caddy serves the Gopher Academy websites, including this blog. Go ahead, check out the response headers.
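As a rough illustration of the kind of Caddyfile this implies (not taken from the article; the domain, paths, and upstream port are placeholders, and the directives shown are the Caddy v1-era ones):

# Caddyfile
example.com {
    # serve static files for the site
    root /var/www/site
    # reverse-proxy API calls to a backend app
    proxy /api localhost:8080
}

With a configuration like this, Caddy terminates TLS for the domain itself, which is part of how a site ends up served over HTTP/2 without separate certificate plumbing.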
Re: Redirect http into https using load balancer?
Oldish post, but people still seem to have problems with this one. I'll set out one more solution that works for us. No need to configure Apache for anything.
...
TVS Apache RTR 160 4V - All You Need To Know About This Bike

Check out the price, features, and specifications of the 2018 TVS Apache RTR 160 4V. It is a 160cc naked street bike, which looks much different than the regular RTR 160. The prices of the bike start from Rs 81,490 (ex-showroom Delhi). Read ahead to get more details on this bike. The 2018 TVS Apache […]

The post TVS Apache RTR 160 4V- All You Need To Know About This Bike appeared first on CarBlogIndia.


Offer - Java Spring Boot MicroServices Training at FuturePoint - INDIA
SPRING BOOT Pre Requisite : Core java and Some Spring concept Knowledge SPRING BOOT ·  Spring Boot Starters ·  Spring Boot Auto-configuration ·  Spring Boot Actuators ·  Spring Boot MVC ·  Spring Boot Test SPRING MICRO SERVICES ·  Introduction ·  Evaluation of Micro Services ·  Principles Of Micro Services ·  Characteristics of Micro Services ·  Micro services Benefits ·  Relationship with SOA ·  Twelve Factor Apps ·  Micro Services use cases ·  Micro Services early adopters ·  Building micro services with boot ·  Micro Services Capability model ·  Micro Services Use case SPRING CLOUD · Spring Config Server · Spring Cloud Bus · Feign Rest client · Load Balancing Using Ribbon · Registry Using Eureka server SPRING JPA  Application Managed Container  Entity Managed Container  Application SPRING DATA SPRING MESSAGING  JMS / AMQP  ActiveMQ / RabbitMQ Server Courses Offerings Amazon Web Services Android AIX Administration Business Analyst Build and Release CA Siteminder CCNA, CCNP Security, CCIE , CheckPoint Citrix XenApp Cognos 10 BI & Tm1 Crystal Reports Data Stage DB2 DBA Dell Bhoomi Dev Ops Dot Net Google Web Tool Kit Golden Gate Hadoop Hyperion Essabase, Planning, HFR , HFM , DRM IBM Websphere Commerce server Admin IBM Lotus Notes (Development) IBM Lotus Notes Domino Server Administration IBM Message Broker IBM MQ Series Administration IBM Netezza DBA & Development IBM Tivoli Access Manager IBM Web Sphere Application Server Administration (WAS) IBM Websphere Transformation extender (WTX 8.2) IBM Integration BUS ( IIB ) Informatica I Phone Swift Language training Java/J2EE JAVA UI Java Apache Wicket JIRA Linux Administration training Mango DB MicroSoft .NET Technologies (VB.NET, C#, ASP.NET, Wcf ,Wpf ,Mvc) Microstrategy MicroSoft Business Intelligence MSBI MS Power BI OBIEE 11 g , 12c ODI ( Oracle Data Integrator) Openstack Oracle FUSION APPS SCm / HCM / Financial Oracle APPS – HRMS, SCM, Manufacturing , Technical , ASCP .Dmantra Oracle APPS – Project Accounting Oracle APPS - iProcurement , iSupplier, Sourcing OAF Oracle BI Apps Oracle BI Publisher Oracle DBA 11g Oracle RAC , Data Guard , Performance Tuning, Oracle Fusion SOA Oracle SQL , PL SQL People soft Functional and Technical PHP Perl Scripting Qlikview RSA Archer Security Operations Management (SecOps) Essentials RUBY Cucumber SAP SD , BO , FICO , BI / BW , APO , BPC, BASIS , SRM , MM, ISOil, BODS SAP Simple Finance SAS Sales Force CRM Service NOW SharePoint Server 2010 Shell Scripting SQL Server DBA Springs and Hibernate Storage Area Network ( SAN) Tableau Team Foundation Server Tera Data Testing Tools - QTP, QC, Load Runner, Selenium, ISTQB TIBCO BW, BE, TIBCO I Process , BPM Tivoli Access Manager & Tivoli Storage Manager Unix & Linux Administration VMWare WCF, WPF, LINQ, AJAX, SILVER LIGHT Webservices , SOAP , REST ( JAVA) Windows 2012 server Drop a mail info@futurepointtech.com we will get in touch with u http://www.futurepointtech.com/spring-boot.html
Alibaba Announces the Open-Sourcing of Sentinel, Its Rate-Limiting and Degradation Middleware


Recently, Alibaba's middleware team announced that it has open-sourced Sentinel and released the first community version, v0.1.0. The GitHub address is: https://github.com/alibaba/Sentinel .

Alibaba's own description of Sentinel is fairly brief:

A lightweight flow-control library providing high-available protection and monitoring (a flow-management framework for high-availability protection).

What is Sentinel?

As microservices grow in popularity, the stability of service-to-service interactions becomes more and more important. Sentinel takes traffic as its entry point and protects service stability along several dimensions, including flow control, circuit breaking and degradation, and system load protection. Put simply, Sentinel is a component that controls calls to resources, covering functional modules such as rate limiting, degradation, and load protection.


Sentinel was born in 2012, and the main feature of its first version was inbound flow control. Over the following six years, Sentinel developed rapidly inside Alibaba Group, becoming a foundational technology module that covers all of the core scenarios. Along the way, Sentinel accumulated a large number of traffic-shaping scenarios and production practices.

Now Alibaba has decided to open-source Sentinel, which can fairly be called a major contribution to the open source community.

Alongside the open-sourcing, Alibaba also announced that it has donated Sentinel's adapter to Dubbo, further rounding out the Dubbo ecosystem.

In a complex production environment there may be thousands upon thousands of Dubbo service instances deployed, with traffic flowing in continuously and services calling one another. In a distributed system, problems such as traffic spikes, excessive system load, and network latency can make some services unavailable; without appropriate controls this can lead to cascading failures that affect overall availability. How to control traffic sensibly therefore becomes the key to guaranteeing service stability. With Sentinel now open source, this is great news for companies and development teams that build microservices with Dubbo.


For a tutorial on integrating Sentinel with Dubbo, see: http://dubbo.incubator.apache.org

Features of Sentinel

Rich application scenarios: Sentinel has handled the core scenarios of Alibaba's Double 11 promotion traffic for nearly 10 years, such as flash sales (keeping burst traffic within the capacity the system can bear), message peak shaving and valley filling, and real-time circuit breaking of unavailable downstream applications.

Comprehensive monitoring: Sentinel also provides real-time monitoring. In the console you can see second-level data for a single machine of a connected application, and even the aggregated runtime status of a cluster of fewer than 500 machines.

Simple, easy-to-use extension points: Sentinel provides extension points that are easy to use; by implementing them you can quickly customize logic, for example custom rule management or adapting data sources.

Sentinel consists of two parts:

The server side is built on Spring Boot and Spring Cloud; once packaged it can be run directly, without installing an application container such as Tomcat.

The Java client does not depend on any framework and can run in any Java runtime environment, with good support for Spring/Spring Boot environments as well.

Functionality of Sentinel

Rate limiting

Suppose we have designed a function that is about to go live; it consumes some resources and can handle at most 3,000 QPS. What do we do if real traffic comes in above 3,000 QPS? Sentinel provides two ways of measuring traffic: counting concurrent threads, and counting QPS. When the number of concurrent threads exceeds a configured threshold, new requests are rejected immediately. When QPS exceeds a configured threshold, the system can respond in one of three ways: direct rejection, cold start (warm-up), or pacing the requests at a uniform rate, thereby achieving flow control.

Degradation

Anyone who has worked with Spring Cloud or a service mesh knows the concepts of circuit breaking and degradation. Services depend on each other: for example, service A can handle tens of thousands of QPS, but service B cannot. How do we make sure service B keeps working while service A calls it at high frequency? A common situation is that when service A calls service B, service B's response time becomes too long because it cannot keep up with the call rate, which makes service A slow as well and sets off a chain reaction that affects every application along the dependency chain. This is where circuit breaking and degradation are needed. Sentinel circuit-breaks or degrades a service through two means: limiting by the number of concurrent threads, and degrading a resource based on response time.
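To illustrate the second mechanism, a degrade rule can be loaded in the same way as the flow rule shown in the project's quick start. Treat this as a sketch only: the class and constant names (DegradeRule, DegradeRuleManager, RuleConstant.DEGRADE_GRADE_RT) should be checked against the Sentinel version you actually use, and the resource name and thresholds are invented for the example:

import java.util.ArrayList;
import java.util.List;

import com.alibaba.csp.sentinel.slots.block.RuleConstant;
import com.alibaba.csp.sentinel.slots.block.degrade.DegradeRule;
import com.alibaba.csp.sentinel.slots.block.degrade.DegradeRuleManager;

public class DegradeRuleDemo {
    public static void main(String[] args) {
        List<DegradeRule> rules = new ArrayList<DegradeRule>();
        DegradeRule rule = new DegradeRule();
        // protect calls to a (hypothetical) downstream resource named "serviceB"
        rule.setResource("serviceB");
        // degrade by average response time: if it stays above 50 ms ...
        rule.setGrade(RuleConstant.DEGRADE_GRADE_RT);
        rule.setCount(50);
        // ... reject further calls to the resource for the next 10 seconds
        rule.setTimeWindow(10);
        rules.add(rule);
        DegradeRuleManager.loadRules(rules);
    }
}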

Traffic shaping

The traffic we encounter is usually random, irregular, and uncontrolled, while a system's processing capacity is limited, so we need to shape the traffic, that is, regularize it according to the system's capacity so that it is handled the way we need. Sentinel controls traffic along three dimensions: the call relationships between resources, runtime metrics, and the desired control effect. Developers can combine these flexibly to achieve the desired result.

Load protection

Normally the system runs without problems, but during a big promotion the machine load can become very high. At that point protecting the system from overload becomes very important in order to prevent an avalanche. Sentinel provides a corresponding protection mechanism that balances the system's inbound traffic against the system's load, ensuring the system handles as many requests as it can within its capacity. Note that Sentinel's load protection decisions are based on balancing the requests the system can handle against the requests it lets in, rather than throttling based on an indirect metric (system load). That is because the goal we are ultimately after is to raise the system's throughput while keeping it from being dragged down, not to push load below some particular threshold.

That covers the introduction to Sentinel. For more news and integration guides, follow the WeChat account 漫话编程 (WeChat ID: mhcoding).



Gosec: A Go Source Code Security Analysis Tool

Gosec is a source code security analysis tool for the Go language. It checks source code for security problems by scanning the Go AST (abstract syntax tree).

License

Licensed under the Apache License, Version 2.0; you may not use this file except in compliance with the License. You may obtain a copy of the License here.

Installation

$ go get github.com/securego/gosec/cmd/gosec/...

Usage

Gosec can be configured to run only a subset of rules, to exclude certain file paths, to generate reports in different formats, and so on. By default, gosec runs all rules against the supplied input files. To scan recursively from the current directory, you can pass './...' as the input argument.

Selecting rules

By default, gosec runs all rules against the supplied file paths. If you only want to run specific rules, you can use the '-include=' flag, or you can use '-exclude=' to exclude the rules you do not want to run.

Available rules

G101: Look for hard-coded credentials
G102: Bind to all interfaces
G103: Audit the use of unsafe blocks
G104: Audit errors not checked
G105: Audit the use of math/big.Int.Exp
G106: Audit the use of ssh.InsecureIgnoreHostKey
G201: SQL query construction using format string
G202: SQL query construction using string concatenation
G203: Use of unescaped data in HTML templates
G204: Audit use of command execution
G301: Poor file permissions used when creating a directory
G302: Poor file permissions used with chmod
G303: Creating tempfile using a predictable path
G304: File path provided as taint input
G305: File traversal when extracting zip archive
G401: Detect the usage of DES, RC4, or MD5
G402: Look for bad TLS connection settings
G403: Ensure minimum RSA key length of 2048 bits
G404: Insecure random number source (rand)
G501: Import blacklist: crypto/md5
G502: Import blacklist: crypto/des
G503: Import blacklist: crypto/rc4
G504: Import blacklist: net/http/cgi

# Run a specific set of rules
$ gosec -include=G101,G203,G401 ./...

# Run everything except for rule G303
$ gosec -exclude=G303 ./...

Annotating code

As with all automated detection tools, gosec will produce false positives. If a gosec finding has been manually verified as safe, the code can be annotated with '#nosec'.

The annotation causes gosec to stop processing any further nodes within the AST, so it can be applied to a whole block or to a single expression.

import "md5" // #nosec


func main(){

    /* #nosec */
    if x > y {
        h := md5.New() // this will also be ignored
    }

}

In some cases you may also want to revisit places that have been annotated with #nosec. To run the scanner and ignore the #nosec annotations, execute the following command:

$ gosec -nosec=true ./...

Build tags

Gosec is able to pass Go build tags to the analyzer. They can be provided as a comma-separated list, as follows:

$ gosec -tag debug,ignore ./...

Output formats

Gosec currently supports text, json, yaml, csv, and JUnit XML output formats. By default, results are written to stdout (standard output). The output format can be specified with the '-fmt' flag, and '-out' specifies the output file.

# Write output in json format to results.json
$ gosec -fmt=json -out=results.json *.go

Development

Install dep following the instructions here: https://github.com/golang/dep

Install the latest version of golint: https://github.com/golang/lint

Build

make

Tests

make test

Release build

Make sure you have goreleaser installed, then you can release gosec as follows:

git tag 1.0.0
export GITHUB_TOKEN=<Your GitHub token>
make release

Released versions of the tool are available in the dist folder. The build information should be displayed in the usage text.

./dist/darwin_amd64/gosec -h
gosec  - Golang security checker

gosec analyzes Go source code to look for common programming mistakes that
can lead to security problems.

VERSION: 1.0.0
GIT TAG: 1.0.0
BUILD DATE: 2018-04-27T12:41:38Z

Note that all released archives are also uploaded to GitHub.

Docker image

You can do a release build and build the Docker image with:

git tag <VERSION>
export GITHUB_TOKEN=<Your GitHub token>
make image

Run gosec within the container:

docker run -it -v <YOUR LOCAL WORKSPACE>:/workspace gosec /workspace

Generating the TLS rule

The configuration of the TLS rule can be generated from Mozilla's TLS ciphers recommendation.

First, you need to install the generator tool:

go get github.com/securego/gosec/cmd/tlsconfig/...

Now you can invoke go generate in the root of the project:

go generate ./...

This will generate a rules/tls_config.go file with the current ciphers recommendation from Mozilla.

*Source: GitHub. Compiled by FreeBuf editor secist; please credit FreeBuf.COM when reposting.


Mogie-yawn

In reply to Mogollon, N.M.: 1940:

Never heard "Mogollon" pronounced by a native Spanish speaker, but the locals near the Mogollon Rim county in AZ and NM call it "mogie-yawn." Named for the Spanish Governor of New Mexico in the early 18th century, the escarpment is cut with canyons and crested with the largest ponderosa pine forest on this planet.

A beautiful place and a favored recreation locale of mine, especially the area that straddles the AZ/NM state line. Little native Apache trout live in the streams, black bear and gray wolves in the forested areas and huge elk graze the open parks. Sublime.

[See you at Big Lake. - Dave]


Apache Corp. to form new energy transit, storage and marketing company
U.S. energy company Apache Corp. said it was forming a strategic partnership to form a $3.5 billion pipeline company meant to cater to Texas shale oil and gas.
West Virginia and the Voatz “blockchain” voting system ― scaling and security ...

In May, West Virginia ran a limited pilot programme using Voatz’ “blockchain” voting system, which I wrote about in June ― it’s actually a mobile phone voting system, with a blockchain tacked on the side. This was for military people who were eligible to vote in Harrison and Monongalia Counties, but were stationed overseas.

West Virginia were sufficiently impressed to use the Voatz system again, for this November’s mid-term elections. This was reported on local WVNews sites at the end of July, but exploded when CNN reported it yesterday.

And my June post took off again, my Twitter mentions melted, and I was quoted in a Vanity Fair article today on the kerfuffle. So what’s going on here?


Why would you run a mobile phone vote?

Mobile phone voting sounds like an obviously terrible idea in all sorts of ways. But they need to solve a genuine problem:

“Think of a soldier on a hillside in Afghanistan or a sailor under the polar ice caps. They don’t have access to U.S. mail. Sometimes they’re in a classified area such as a nuclear sub or simply don’t have access to scanners, fax machines and that sort of thing. They do have access to the internet, mobile devices. It’s a tremendous solution to a very difficult problem and with West Virginia having the highest per capita volunteers in the U.S. military, we owe it to them.” “I’ve had voters who have overnighted to our jurisdiction and paid over $50 to do so, and it still didn’t get back to us by voting day.”

The voters are identified by biometrics. The Voatz system will be limited to military personnel on deployment ― people whose biometrics are thoroughly known and documented. It’s entirely optional, and soldiers can use a conventional paper vote instead if they want to.

The pilot programme in May wasn’t huge ― literally 11 voters from Monongalia County used the system. “I think all 11 military voters who used it in our county were pleased with it.”

Mobile phone voting: “a horrific idea”

Obviously, Voatz want to expand mobile phone voting. But the notion is controversial, to say the least:

“Mobile voting is a horrific idea,” Joseph Lorenzo Hall, the chief technologist at the Center for Democracy and Technology, told CNN in an email. “It’s internet voting on people’s horribly secured devices, over our horrible networks, to servers that are very difficult to secure without a physical paper record of the vote.” Marian K. Schneider, president of the election integrity watchdog group Verified Voting, was even more blunt. Asked if she thought mobile voting is a good idea, she said, “The short answer is no.”

If mobile phone voting can be usably secure at all, it will only be in a small and highly constrained system such as these pilot programmes.

How the blockchain bit works

The “blockchain” part of Voatz’ system is functionally superfluous ― it’s a ledger of the votes, kept on a four-node Hyperledger instance run entirely by the company. So it’s another single-user “blockchain” being used as a clustered database.

I must note that Voatz disagree with this characterisation, referring me to the FAQ on wvexperience.voatz.com (go to the page, click “Blockchain & Security” on the left):

Once the voter is verified, Election jurisdictions start the process by sending a qualified voter a mobile ballot. Contained in the mobile ballot are “tokens” ― think of them as potential votes ― which are cryptographically tied to a candidate or ballot measure question. The number of tokens a given voter receives is the same as the number of ovals he or she would have received on a paper ballot handed out at the voter’s precinct or sent through the mail. The voter makes selections on the Voatz app on their smartphone. As they make selections, it alters the tokens with their selections (like filling in a ballot oval). Overvotes are prevented, as each voter only receives a total number of tokens as they have potential votes. Once submitted, the votes for choices on the ballot are verified by multiple distributed verifying servers called “verifiers” or validating nodes. Upon verification, the token is debited (i.e. subtracted) from the voter’s ledger and credited (i.e. added) to the candidate’s ledger. The blockchain on every verifier is automatically updated and the process repeats as additional voters submit their selections. The Voatz blockchain is built using the HyperLedger blockchain framework. The minimum number of validating nodes used is 4. These get expanded to 16 for the pilot as needed depending on the anticipated number of participants. Additional scaling is planned for the future.

Though I still think this constitutes a private clustered database ― and certainly as long as Voatz control all verification nodes, or even if they control who gets to run a verification node.

The token arrangement seems bizarrely convoluted and gratuitous ― cryptographic tokens are widely used, work well, and they don’t need a blockchain. This still feels to me like implementing a naturally-centralised system on a blockchain because you want to say you used a blockchain.

The functional aspect of the blockchain bit is promotional:

Secretary of State deputy legal counsel and elections officer Donald Kersey said this means votes on Voatz become immutable and tamper proof, with records virtually impossible to crack.

Anyone reading this knows that none of that automatically follows from bolting a blockchain onto the side of your system.

There’s also a huge problem with the idea of recording the votes themselves on a permanent ledger. Joseph Lorenzo Hall in Vanity Fair asks you to “imagine that in 20 years, the entire contents of your ballot are decryptable and publicly available” ― rather than on pieces of paper that can’t be traced back to you personally.

Voatz in Utah, April 2018 ― 1400 voters go back to using paper

One thing that has to work with absolutely 100% reliability is voters being able to vote at all.

Tony Adams notes the 14 April 2018 Republican County Convention in Utah County, Utah, a caucus with about 1400 voters. They tried using Voatz, and it scaled so badly that they had to revert to using paper ballots.

Here’s some voter reviews:

This app is terrible. Good thing there were backup paper ballots … seriously awful Just wow! What an epic failure of an app. I had to sign up several times, validate, scan and wait wait wait for a “connection issue”. Me and the 1400 ish Delegates ended up doing paper ballots which made our convention go several hours overtime. After going through the lengthy and counter-intuitive verification process, I could not understand the directions and ended up calling them over the phone before the Utah County Republican precinct caucus meeting. I was exited to vote and still be with my kids. When voting was supposed to happen the server was over loaded. Eventually the app stopped working. I had to reinstall and reverify. Could not vote. The next day I come to find out my precinct gave up on the app and just used paper ballots instead. Major let down. Bye the way it also failed during many local caucus meetings a few weeks before. Out of 273 caucus meetings it only worked for three of them. Voatz’ security embarrassments

Election manipulation is, of course, huge news at the moment. So Voatz should have expected tremendous scrutiny of their security and technological transparency, in every detail.

It’s unfortunate they had an old server still up ― always remember to stop your old AWS instances!― for Kevin Beaumont to find at a glance:

The Voatz website is running on a box with out of date SSH, Apache (multiple CVSS 9+), php etc. Pop3 to the Internet, NTP, PHP3, Plesk from 2009. The database (on Azure) has an admin panel on port 8080, no SSL. I’m off to bed.


The United States needs some form of vetting process for online voting in elections. I’m a foreign dude with an avatar of a cowboy porg riding a porg dog on Twitter who appears to have done more investigation of the security implications of this than anybody. Bonkers, America.

If a startup (I’m sure they’re nice people btw) with 2m in funding approaches and says they have biometric security and Blockchain it still need independent vetting, at least to level a crab paste company would get a HR provider. There needs to be oversight here.

I can’t even find a Voatz CISO (or security person) to report stuff to. They have long unpatched boxes and weird services online, this wouldn’t pass a crab paste company pentest.

I used to work for a crab paste company with little to no IT budget, I wouldn’t have accepted this into production, but apparently the world’s most prosperous nation will.

Voatz say this was an old test site ― but leaving exploitable old servers up is a gateway to your new stuff. Did they check that nobody could get from the old server to the new servers? Are they in different Amazon VPCs?

Crucially, I find it unlikely that if you're running a Plesk from 2009 and a run of the mill poorly written PHP app on the user facing site that your security is all that great on the backend. There's at least someone in the org that is totally fine with an exploitable site.

― Keith Gable (@ZiggyTheHamster) August 7, 2018

Voatz claim the West Virginia election site was audited by Security Innovation, Ingalls Information Security, Hacker One, Comodo/HackerGuardian and Qualys SSL Labs.

Kevin asked them about this, and says that “One of the companies listed as providing a security audit says they did not provide a security audit.”

Hacker One just means Voatz have a bug bounty programme ― though I couldn’t find where they’ve listed it.

Qualys just provides a free SSL server test for any public website ― and Voatz do seem to mean the free SSL test, as the free test of their website was the link they provided to Vanity Fair as a sample of their security practices.

In fact, Voatz tweeted this quick SSL server test as evidence their servers had passed penetration tests.

Yes, you can do a quick self verification SSL test here to get a sample of that https://t.co/7GEZRPqdXX

We always appreciate constructive feedback to improve.

― Voatz (@Voatz) August 7, 2018

Summary

To be fair, the Twitter is probably just the social media person, having an absolutely terrible day ― not one of the technical people. But they need to get the techies on the job straight away.

The failure to scale in Utah is a serious problem, though overseas military voters are likely to be a small enough use case for the system to cope.

But mobile phone voting worries people a lot.

Voatz need to put out public reports ― as fully detailed and transparent as is feasible ― on every aspect of the entire system, as soon as they can.

Treat every scornful tweet today as a pointer to an opportunity to excel. A chance to restore confidence.

@Voatz ,
Oh, No, Not Another Security Product

Let's face it: There are too many proprietary software options. Addressing the problem will require a radical shift in focus.

Organizations and businesses of all types have poured money into cybersecurity following high-profile breaches in recent years. The cybercrime industry could be worth $6 trillion by 2022, according to some estimates , and investors think that there's money to be made. But like generals fighting their last battle, many investors are funding increasingly complex point solutions while buyers cry out for greater simplicity and flexibility.

Addressing this problem requires a radical shift in focus rather than a change of course. Vendors and investors need to look beyond money and consider the needs of end users.

More Money, More Problems

London's recent Infosecurity conference included more than 400 vendors exhibiting, while RSA in San Francisco boasted more than 600. And this only includes those with the marketing budgets and inclination to exhibit. One advisory firm claims to track 2,500 security startups , double the number of just a few years ago. Cheap money has created a raft of companies with little chance of IPO or acquisition, along with an even greater number of headaches for CISOs trying to make sense of everything.

The market is creaking from this trend, with Reuters reporting mergers and acquisitions down 30% in 2017, even as venture capital investment increased by 14%. But the real pain is being felt by CISOs trying to integrate upward of 80 security solutions in their cyber defenses, as well as overworked analysts struggling to keep up. The influx of cash also has caused marketing budgets to spike, leading to a market in which it is deemed acceptable for increasingly esoteric products to be promoted to CISOs as curing everything.

All of this feeds into a sense of "product fatigue" where buyers are frightened into paying for the latest black box solution, only to see their blood pressure spike when they find that they don't have the necessary resources to deploy or support these tools. This situation does not benefit any of the parties ― the overwhelmed CISO, the overly optimistic investors, or the increasingly desperate vendors caught in limbo between funding rounds when their concepts weren't fully baked to begin with.

Addressing complex modern threats calls for sophisticated tools and products, but we cannot fight complexity with complexity. Security operations center teams cannot dedicate finite analyst capacity to an ever-expanding battery of tools. Fragmentation within the security suite weakens company defenses and the industry as a whole, and the drain on analysts' time detracts from crucial areas such as basic resilience and security hygiene.

Platforms, Not Products

The industry doesn't need more products, companies, or marketing hype. We need an overhaul of the whole approach to security solutions, not an improvement of components. Security should be built on platforms with a plug-and-play infrastructure that better supports buyers, connecting products in a way that isn't currently possible.

Such platforms should be flexible and adaptable, rewarding vendor interoperability while punishing niche solutions that cannot be easily adopted. This would lead to collaboration within the industry and create a focus on results for end users, rather than increasingly blinkered product road maps. Such platforms could act as a magnifying glass for innovation, providing a sandbox to benchmark new technologies and creating de facto security standards in the process.

This move from proprietary architecture to open modular architecture is a hallmark of Clayton Christensen's disruptive innovation theory, and it is long overdue within the security industry. Buyers will have greater control of their tech stacks, while vendors and investors will get to proof-of-concept faster, and see greater efficiency within the market.

One example of such a platform is Apache Metron, an open source security platform that emerged from Cisco. Metron has been adopted by a number of major security providers and provides a glimpse of what the future of security should look like.

Collaborating, creating industry standards, or making technologies open source does not mean that vendors can't make money; in fact, the reverse is true. Customers will be more willing to invest in security solutions that they know are future-proofed, that don't come with the dreaded "vendor lock-in," and that simplify rather than further complicate their architecture.

Like all of security, there are varying degrees of risk and reward, but this approach is starting to look like the only logical future in an increasingly frothy, confusing, and low return-on-investment field. There will be a correction in the security market, whether it is in a month or a year. The fundamentals that will cause this are already evident, so there is an excellent opportunity to learn the lessons in advance and minimize the pain by contributing toward the platforms of the future.

Related Content:
10 Open Source Security Tools You Should Know
Secure Code: You Are the Solution to Open Source's Biggest Problem
The Good News about Cross-Domain Identity Management


Paul Stokes has spent the last decade launching, growing, and successfully exiting security and analytics technology companies. He was the co-founder and CEO of Cognevo, a market-leading security analytics software business that was acquired by Telstra Corporation. Prior to ...


EVENT: Exposure Open Mic 8/9 Apache Cafe – ATL, GA
FTP login problem on webserver.
Cannot log in to an Ubuntu / Nginx / ISPConfig webserver. The FTP account was made using the ISPConfig control panel, but the FTP client cannot log on. Could be an ISPConfig installation problem. Can FTP onto Ubuntu directly though! Error: 331 Please specify the password... (Budget: $10 - $30 USD, Jobs: Apache, Debian, Linux, System Admin, Ubuntu)
GitHub - alibaba/Sentinel: A lightweight flow-control library providing high-available protection and monitoring (高可用防护的流量管理框架)


What Does It Do?

As distributed systems become increasingly popular, the stability between services is becoming more important than ever before. Sentinel takes "flow" as breakthrough point, and works on multiple fields including flow control, concurrency, circuit breaking and load protection, to protect service stability.

Sentinel has the following features:

  • Rich applicable scenarios: Sentinel has been wildly used in Alibaba, and has covered almost all the core-scenarios in Double-11 Shopping Festivals in the past 10 years, such as “Second Kill” which needs to limit burst flow traffic to meet the system capacity, message peak clipping and valley fills, degrading unreliable downstream applications, etc.

  • Integrated monitor module: Sentinel also provides real-time monitoring function. You can see the runtime information of a single machine in real-time, and the summary runtime info of a cluster with less than 500 nodes.

  • Easy extension point: Sentinel provides easy-to-use extension points that allow you to quickly customize your logic, for example, custom rule management, adapting data sources, and so on.

Documentation

See the 中文文档 for Chinese readme.

See the Wiki for full documentation, examples, operational details and other information.

See the Javadoc for the API.

If you are using Sentinel, please leave a comment here to tell us your use scenario to make Sentinel better :-)

Quick Start

Below is a simple demo that guides new users to use Sentinel in just 3 steps. It also shows how to monitor this demo using the dashboard.

1.Download Library

Note: Sentinel requires Java 6 or later.

If your application is built with Maven, just add the following code in pom.xml.

<dependency>
    <groupId>com.alibaba.csp</groupId>
    <artifactId>sentinel-core</artifactId>
    <version>x.y.z</version>
</dependency>

If not, you can download the JAR from the Maven Central Repository.

2.Define Resource

Wrap the code snippet via the Sentinel API: SphU.entry("RESOURCENAME") and entry.exit(). In the example below, it is System.out.println("hello world");:

Entry entry = null;

try {   
  entry = SphU.entry("HelloWorld");
  
  // BIZ logic being protected
  System.out.println("hello world");
} catch (BlockException e) {
  // handle block logic
} finally {
  // make sure that the exit() logic is called
  if (entry != null) {
    entry.exit();
  }
}

So far the code modification is done.

3.Define Rules

If we want to limit the access times of the resource, we can define rules. The following code defines a rule that limits access to the resource to 20 times per second at the maximum.

List<FlowRule> rules = new ArrayList<FlowRule>();
FlowRule rule = new FlowRule();
rule.setResource("HelloWorld");
// set limit qps to 20
rule.setCount(20);
rule.setGrade(RuleConstant.FLOW_GRADE_QPS);
rules.add(rule);
FlowRuleManager.loadRules(rules);

4. Check the Result

After running the demo for a while, you can see the following records in ~/logs/csp/${appName}-metrics.log.xxx.

|--timestamp-|------date time----|--resource-|p |block|s |e|rt
1529998904000|2018-06-26 15:41:44|hello world|20|0    |20|0|0
1529998905000|2018-06-26 15:41:45|hello world|20|5579 |20|0|728
1529998906000|2018-06-26 15:41:46|hello world|20|15698|20|0|0
1529998907000|2018-06-26 15:41:47|hello world|20|19262|20|0|0
1529998908000|2018-06-26 15:41:48|hello world|20|19502|20|0|0
1529998909000|2018-06-26 15:41:49|hello world|20|18386|20|0|0

p stands for incoming requests, block for requests blocked by rules, s for successfully handled requests, e for exceptions, rt for average response time (ms)

This shows that the demo can print "hello world" 20 times per second.

More examples and information can be found in the How To Use section.

The working principles of Sentinel can be found in How it works section.

Samples can be found in the sentinel-demo module.

5. Start Dashboard

Sentinel also provides a simple dashboard application, on which you can monitor the clients and configure the rules in real time.

For details please refer to Dashboard.

Troubleshooting and Logs

Sentinel will generate logs for troubleshooting. All the information can be found in logs.

Bugs and Feedback

For bug reports, questions and discussions, please submit a GitHub issue.

Contact us: sentinel@linux.alibaba.com

Contributing

Contributions are always welcomed! Please see CONTRIBUTING for detailed guidelines.


          发现CVE-2018-11512-wityCMS 0.6.1 持久型XSS      Cache   Translate Page   Web Page Cache   

Discovering CVE-2018-11512 - wityCMS 0.6.1 Persistent XSS

CMS(内容管理系统)很适合被用来做代码审计,尤其是现在CMS系统越来越流行,很多人愿意使用CMS搭建自己的项目。由于大部分CMS是一种开源项目,所以对于CMS的审计属于白盒测试,白盒测试让我们可以发现更多的安全漏洞,而且一旦我们发现了这些漏洞,由于其被广泛使用,所以它的漏洞的影响范围也是呈指数级增长的。这是因为通过白盒测试我们可以查看到程序的内部结构,从而更清楚的理解程序的工作原理。

WityCMS就是一个由CreatiWity制作的CMS系统,它帮助管理不同用途的内容,如个人博客、商业网站或任何其他定制系统。在本文中,我将介绍如何设置CMS,查找web应用程序问题,以及如何复现CVE-2018-11512漏洞。

环境安装(windows下安装xampp)

  • 1.下载WityCMS0.6.1的源代码
  • 2.把/witycms-0.6.1 目录复制到C:\xampp\htdocs\ 下 或者是你自己安装xampp的的htdocs目录
  • 3.运行Apache和MySQL然后访问http://localhost/phpmyadmin/index.php.
  • 4.点击"databases"(中文版本的"数据库")
  • 5.创建一个名为"creatiwity_cms"的数据库

查找漏洞

因为这篇文章主要是关于CVE-2018-11512的,所以我今天就只找这个程序中的持久型XSS的洞,开始之前,我们先了解下什么是持久型XSS。

根据OWASP的介绍,"跨站脚本攻击(xss)是一种注入类型的攻击手段,它允许恶意web用户将代码植入到提供给其它用户使用的页面中"。这意味着只要一个网站上存在注入点,xss就可能被触发。目前有三种类型的XSS,但是本文我将讨论常见的XSS,即反射型XSS和持久型XSS。

当输入的数据被在发出请求后被返回给我们时,反射型XSS就会被触发。对于反射型XSS来说,网站的搜索功能可以作为一个测试反射型XSS的很好的例子。当用户在搜索框中输入一段payload后,该搜索功能可能会受到反射型XSS的影响。

另外,持久型XSS也被称为"存储型XSS"。这种类型的XSS值会被保存在系统中的某个数据库或是文件中。XSS的利用点通常存在于可以让用户随时更改的设置操作中,比如用户的个人信息页,可以设置用户的电子邮件,姓名,地址之类的地方。也可能存在于用户可以自己更改的某些系统设置中。

对于wityCMS,我的目标是找到可以在系统中保存数据的利用点。这基本上可以手工完成,也可以通过工具自动找到这些利用点。由于我已经在Windows中安装了它,所以我必须使用命令“findstr”而不是“grep”(抱歉,喜欢用"grep"的同学们)。可以在这里找到"findstr"的相关信息。

要列出可以输入恶意代码的文件,我们可以使用以下命令:

/S = Recursive searching
/P = Skip files with non-printable characters
/I = Case insensitive
/N = Prints the line number
/c:<STR> = String to look for

代码:

findstr /SPIN /c:"<input" "c:\xampp\htdocs\witycms-0.6.1*.html"

命令行运行后的结果:

这个结果肯定很让人惊喜,因为可能存在XSS的地方太多了。登录到管理员面板后,我们可以轻松的在输入框中输入我们的payload。通过访问http://localhost/witycms-0.6.1/,我们可以看到一个很明显的值,如图所示:

我们安装这个CMS的时候设置了这个站点名称,它现在显示在主页上,不知道这个站点名称会不会存在持久型XSS,现在我们看看能不能在管理设置里修改这个值。

使用安装时设置的管理员账号密码登录到管理面板,登录后,管理面板中会有一个这样的小链接:

点击"Administration"后,网页会被重定向到我们安装时的执行设置操作的页面,第一个设置值也是网站名称。

插入一个非常简单的XSS代码试试:

<script>alert(1)</script>

点击"save(保存)"后,返回值为:

可以注意到<script>和</script>标签被过滤了,因此我们可以知道该系统中存在一个防护机制,所以现在我们需要找到这个防护机制的运行原理。

当数据被保存到数据库中时,会处理一个请求。在这种情况下,我们应该能够识别请求方法是POST还是GET,在页面空白处右键单击"审查元素"查看源代码后,可以确认该方法是POST请求。

从这点来看,我们应该尝试找到POST请求发生的地方,这样顺下去我们就可以看到防护机制的运行点。因此,在cmd中输入以下命令:

findstr /SPIN /c:"$_POST" "c:\xampp\htdocs\witycms-0.6.1*.php"

这个命令类似于我们之前查找包含“input”标记的文件,但是这次,我们尝试在.php文件中查找引用"$_POST"的地方。

因为其他文件都与默认包含的库有关,这些都pass掉。所以命令的结果指向文WMain.hp,WRequest.php和WSession.php。浏览这些文件将我们发现在WRequest中有一个有趣的函数。如下所示,当防护机制发现脚本标示符时,这些标示符将被一个空字符串替换:

由于过滤器函数没有递归,所以过滤器只能拦截这样的输入:

所以输入这种内容是可以绕过过滤器的:
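
(原文此处为截图,已无法显示。下面是一个示意性的 payload,仅用于说明思路,并非原文截图内容:假设过滤器只对 <script> 和 </script> 各做一次非递归替换,采用如下嵌套写法,替换完成后恰好会还原出完整的脚本标签。)

<scr<script>ipt>alert(1)</scr</script>ipt>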

在我们设置站点名称的输入框中输入以下内容,我们将会得到以下结果:

一旦这个payload被设置为站点名称,访问网站的用户将会触发这个脚本,即使TA并没有经过身份验证。

这就开启了新世界的大门,因为当用户访问网站时会执行某些恶意脚本可能会造成比较严重的后果。比如可以将用户重定向到钓鱼站点,在用户不知情的情况下执行矿机脚本,或者其他很多操作。

处理CVE编号

由于这个bug容易引起安全问题,并且这个CMS正在被数以千计的人使用,所以我决定给这个程序申请一个CVE编号,以此来获得一个公开的CVE条目。

CVE 的英文全称是"Common Vulnerabilities & Exposures",CVE就好像是一个字典表,为广泛认同的计算机信息安全漏洞或者已经暴露出来的弱点给出一个公共的名称。CNAs(CVE Numbering Authorities)根据程序类型分别处理这些CVE编号的漏洞。例如,如果联想设备中发现了安全问题,应该向联想的产品安全应急响应团队报告,在评估了漏洞后,他们将会给这个漏洞一个CVE编号。

这说明,如果同样是在CNA公司的产品或项目中发现了漏洞,他们评估后可以直接给出一个CVE编号,在CNAs的CVE的漏洞列表中可以通过编号直接找到这个漏洞。而对于wityCMS, CreatiWity这两个产品,其创建者没有注册到CNA,所以我们可以向MITRE公司申请这个持久型XSS漏洞的CVE编号,下面是处理CVE漏洞事件的步骤:

  • 1.确认产品是否由CNA管理。如果由CNA管理,则报告该特定CNA的漏洞。如果不是,则报告给MITRE公司。
  • 2.通过google确认发现的漏洞是否已经分配了一个CVE编号。经常检查产品更新,以确认漏洞是否已经公开。
  • 3.对于wityCMS的情况,我使用了MITRE公司的CVE申请表单,可以在这里找到。
  • 4.在表格中填写所需的详细信息。关于wityCMS的这个漏洞,我是这样填的:
  • Vulnerability Type: Cross-Site Scripting
  • (漏洞类型:xss)
  • Product: wityCMS
  • (厂商:wityCMS)
  • Version: 0.6.1
  • (版本:0.6.1)
  • Vendor confirmed the vulnerability? No (Not acknowledged yet at the time - of request)
  • 厂商是否已确认该漏洞 没有 (漏洞提交时厂商未确认)
  • Attack Type: Remote
  • 攻击类型:远程
  • Impact: Code execution
  • (影响:代码执行)
  • Affected Components: Source code files showing “site_title” as output
  • 受影响的组件:输出"site_title"的源文件
  • Attack Vector: To exploit the vulnerability, one must craft and enter a script in the Site name field of the system
  • 攻击方式:必须在系统的站点名称字段中手工注入脚本
  • Suggested Description: Stored cross-site scripting (XSS) vulnerability in the "Website's name" field found in the "Settings" page under the "General" menu in Creatiwity wityCMS 0.6.1 allows remote attackers to inject arbitrary web script or HTML via a crafted website name by doing an authenticated POST HTTP request to admin/settings/general.
  • 漏洞详情:在creatiwitycms 0.6.1的“设置”菜单下的“网站名称”字段中存在存储型XSS漏洞,允许远程攻击者通过一个经过验证的POST HTTP请求向admin/ Settings / General注入任意的web脚本或HTML。
  • Discoverer: Nathu Nandwani
  • (发现者:Nathu Nandwani)
  • Reference(s): https://github.com/Creatiwity/wityCMS/issues/150, https://github.com/Creatiwity/wityCMS/co...229147de44
  • 参考

填写信息应该详细一点。为了让CVE处理的更快一些,描述中最好引用一些可以辅助理解漏洞的资料,并且详细地描述漏洞细节,如果可以,还应该写上漏洞可能有的修复方案。例如,在发送报告之前,我在这个项目的GitHub主页上发现了这个漏洞可能存在的点,因为有很多已经公开的关于存储型XSS的CVE漏洞,我找了其中的一个作为参考,然后通过这个漏洞想到了构造一个存储型XSS方法,并且注意到在这个GitHub项目中可能通过这个方法复现这个漏洞。

最后一点小贴士

  1. 如果细节已经公开,那么CVE号处理只需要一两天,所以最好先与开发人员或与项目相关的响应团队进行沟通,以便进行适当的修复。
  2. CVE漏洞的细节应该是准确的。更改发送给CNAs的报告的细节将减慢审核的速度,这意味着必须首先确认漏洞,不要浪费双方的时间。
  3. 更多关于CVE漏洞提交的细节可以在这里找到。
  4. VulDB提供漏洞公开服务。注册一个VulDB账号,你可以在那里提交一个条目。例如,这里是这个安全问题的VulDB条目。
  5. 也可以提交到exploit-db.com。这不仅显示出问题确实存在,而且还为CVE编号增加了可信的参考,因为安全团队尽其所能地测试验证漏洞是否存在。这里是一个exploit-db.com条目,请注意它目前正在等待验证。提交说明可以在这里找到

    我在这个wityCMS的一些版本中发现了其他持久型的XSS漏洞,但是我没有为它应用CVE编号。你能找到它们吗?期待听到您的意见或问题。(゜-゜)つロ 干杯~

    作者: nats
    翻译:i春秋翻译小组-prison
    翻译来源:https://greysec.net/showthread.php?tid=3202

感觉大佬们获取证书这么简单嗯! 双写绕过 学习一下~ 学习学习
          Java Developer - ALTA IT Services, LLC - Clarksburg, WV      Cache   Translate Page   Web Page Cache   
Experience with the following technologies – J2EE, Weblogic, Java, Javascript, JQuery, AngularJS, Apache, Linux, Subversion, and GitHub....
From ALTA IT Services, LLC - Tue, 12 Jun 2018 17:33:52 GMT - View all Clarksburg, WV jobs
          mod_passenger 5.3.4-1 x86_64      Cache   Translate Page   Web Page Cache   
Passenger apache module
          Rita Flaherty: Lockheed Builds F-35, Apache Aircraft Sensors in Orlando, Fla.      Cache   Translate Page   Web Page Cache   
Rita Flaherty, vice president of business development at Lockheed Martin, has said the company’s missiles and fire control business has designed and developed at least 11,000 sensor platforms through its Orlando, Fla.-based facility in support of military intelligence and surveillance missions, Orlando Business Journal reported Thursday. Those platforms include the Modernized Target Acquisition Designation Sight/Pilot Night […]
          goPanel 2.0.3 – Manage Web servers.      Cache   Translate Page   Web Page Cache   
goPanel is an incredibly intuitive OS X app for the management of web servers, an alternative to existing control-panel apps you install on Unix-based servers for web hosting. Easily install and configure an Apache or Nginx web server, PHP, MySQL, FTP, domains, free SSL certs and email on your server. goPanel lets you easily connect and manage unlimited […]
          Talend with Big Data - Kovan Technology Solutions - Houston, TX      Cache   Translate Page   Web Page Cache   
Hi, We are currently looking for Talend Developer with the below skills 1) Talend 2) XML, JSON 3) REST, SOAP 4) ACORD 5) Hadoop - HDFS, AWS EMR 6) Apache...
From Indeed - Fri, 27 Jul 2018 13:34:33 GMT - View all Houston, TX jobs
          How to Install Apache Maven on CentOS 7      Cache   Translate Page   Web Page Cache   
Apache Maven is an open-source software project management and build automation tool based on the concept of a project object model (POM). It is primarily used for deploying Java-based applications, but...
          How to Install PHP on Windows      Cache   Translate Page   Web Page Cache   

We've previously shown you how to get a working local installation of Apache on your Windows PC. In this article, we'll show how to install PHP 5 as an Apache 2.2 module.

Why PHP?

PHP remains the most widespread and popular server-side programming language on the web. It is installed by most web hosts, has a simple learning curve, close ties with the MySQL database, and an excellent collection of libraries to cut your development time. PHP may not be perfect, but it should certainly be considered for your next web application. Both Yahoo and Facebook use it with great success.

Why Install PHP Locally?

Installing PHP on your development PC allows you to safely create and test a web application without affecting the data or systems on your live website. This article describes PHP installation as a module within the Windows version of Apache 2.2. Mac and Linux users will probably have it installed already.

All-in-One packages

There are some excellent all-in-one Windows distributions that contain Apache, PHP, MySQL and other applications in a single installation file, e.g. XAMPP (including a Mac version), WampServer and Web.Developer. There is nothing wrong with using these packages, although manually installing Apache and PHP will help you learn more about the system and its configuration options.

The PHP Installer

Although an installer is available from php.net, I would recommend the manual installation if you already have a web server configured and running.
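
If you do install it manually, the Apache side of the setup usually comes down to a few httpd.conf directives along these lines (the paths are only examples and assume PHP is extracted to C:\php; the exact DLL name depends on which PHP 5 build you download):

# Example httpd.conf directives for loading PHP 5 into Apache 2.2 on Windows
LoadModule php5_module "C:/php/php5apache2_2.dll"
AddHandler application/x-httpd-php .php
PHPIniDir "C:/php"

After adding the directives, restart Apache and place a test .php file in your document root to confirm the module is loaded.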

The post How to Install PHP on Windows appeared first on SitePoint.


          Mordedura de animales      Cache   Translate Page   Web Page Cache   

Animal bites

An animal bite can break the skin, cause a bruise, or leave a puncture wound.

General considerations

If the bite is a puncture wound, it is more likely to become infected. Rabies is an uncommon but potentially fatal disease transmitted by the saliva of infected animals. If you believe an animal may have rabies, you should notify the appropriate authorities. Examples include raccoons that are active during the day, a stray pet, an animal that is acting strangely, or an animal that bites without having been provoked. Be especially cautious with bats. Some doctors believe that any potential contact with a bat, even simply seeing one in the house, calls for a rabies vaccination. There is no cure for rabies once symptoms have developed, but timely vaccination after exposure can immunize a person before those symptoms appear. If you think you may have been exposed to rabies, you should get vaccinated immediately. Some studies show that in cases where a person contracts rabies from bats, many of the victims did not even know they had been bitten. If you see a bat in the house or have any kind of contact with one, you should see a doctor immediately for advice. Many animal bites should be treated with antibiotics, even if a rabies vaccine or stitches are not required. Bites to the hands or fingers in particular warrant antibiotics. If you have any doubt about the need for treatment, seek medical attention.

Symptoms of animal bites...

The post Mordedura de animales appeared first on Clínica DAM.


          Java连接HBase(kerberized集群)      Cache   Translate Page   Web Page Cache   

社区原文 “Connecting to HBase in a Kerberos Enabled Cluster”

讲解如何通过 Java 或 Scala 在启用 Kerberos 的群集中连接到 HBase。

本测试需要一个启用了kerberos的HDP集群。集群搭建参考 《Ambari在本地VM(centos7.3)部署hadoop集群》 。本测试在HDP集群的C7302节点(centos7.3)上进行。首先,下载java样例代码:

$ cd /opt
$ git clone https://github.com/wbwangk/hdp-test-examples

这个github库是从 jjmeyer0/hdp-test-examples 库fork的。主要修改有:

  • 修改了 pom.xml 文件:增加了对 HDP2.6.1 的支持;去掉了 Scala 相关依赖,因为会导致构建失败
  • 修改了 src/main/java/com/jj/hbase/HBaseClient.java 中 jj 用户主体为 jj@AMBARI.APACHE.ORG

创建keytab

在 c7302 节点用管理员账号登录 KDC,然后创建叫jj的主体,并导出 keytab:

$ kinit root/admin@AMBARI.APACHE.ORG
$ kadmin -q "addprinc jj" (创建jj主体,需要输入两次密码,密码是1)
$ ktutil
ktutil: addent -password -p jj -k 1 -e RC4-HMAC
Password for jj@AMBARI.APACHE.ORG: 1
ktutil: wkt jj.keytab (生成了keytab文件)
ktutil: q
$ scp jj.keytab /opt/hdp-test-examples/src/main/resources 准备HBase用户

jj 用户必须在 HBase 中获得正确的权限。Ambari 为 HBase创建一个管理员用户,通过 keytab 查找管理员用户主体。并利用它登录,利用密钥文件登录不需要密码:

$ klist -kt /etc/security/keytabs/hbase.headless.keytab (查看hbase服务的printcipal )
KVNO Timestamp Principal
---- ------------------- ------------------------------------------------------
1 07/06/2017 03:53:35 hbase-hdp2610@AMBARI.APACHE.ORG
$ kinit -kt /etc/security/keytabs/hbase.headless.keytab hbase-hdp2610 (实测只能用这个主体登录,即使root/admin主体都不行)
$ hbase shell
hbase(main):001:0> grant 'jj','RW'

准备配置文件

运行例子需要的文件有三个:

hbase-site.xml、.keytab、krb5.conf

前文已经复制了jj.keytab,现在要复制另外两个。

由于使用HDP集群的节点充当客户机,所以直接在本节点复制文件即可:

$ scp /etc/hbase/conf/hbase-site.xml /opt/hdp-test-examples/src/main/resources/
$ scp /etc/krb5.conf /opt/hdp-test-examples/src/main/resources/

对于测试,建议在 hbase-site.xml 中更改 “hbase.client.retries.number” 属性。默认情况下为35。这个“重试次数”这在运行测试时太大了,复制后可以修改为3。

其它修改

目录 `/opt/hdp-test-examples/src` 下有两个目录:`main` 和 `test`。`main` 目录放置客户端程序,而 `test` 目录是单元测试目录。来到目录 `/opt/hdp-test-examples/src/test/java/com/jj` 下看看,发现除了 hbase 还有个 pig 目录。如果只是测试 java 客户端连接 hbase,建议删除 pig 目录。否则在 maven 构建时也会执行 pig 的单元测试,而由于没有正确配置 pig,导致必然出错使构建失败。

代码讲解

例子的 Java 代码位于 src/main/java/com/jj/hbase/HBaseClient.java 。在代码中,首先需要做的是创建和加载 HBase 配置:

// Setting up the HBase configuration
Configuration configuration = new Configuration();
configuration.addResource("src/main/resources/hbase-site.xml");

接下来指向 krb5.conf 文件并设置 Kerberos 主体和 keytab。

// Point to the krb5.conf file.
System.setProperty("java.security.krb5.conf", "src/main/resources/krb5.conf");
System.setProperty("sun.security.krb5.debug", "true");
// Override these values by setting -DkerberosPrincipal and/or -DkerberosKeytab
String principal = System.getProperty("kerberosPrincipal", "jj@AMBARI.APACHE.ORG");
String keytabLocation = System.getProperty("kerberosKeytab", "src/main/resources/jj.keytab");

现在使用上面定义的主键和 keytab 登录。

UserGroupInformation.setConfiguration(configuration);
UserGroupInformation.loginUserFromKeytab(principal, keytabLocation);

Maven构建、测试

$ cd /opt/hdp-test-examples
$ mvn clean test -P hdp-2.6.1 (如果网络差则耗时较长)
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building hdp-test-examples 1.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hdp-test-examples ---
[INFO] Deleting /opt/hdp-test-examples/target
[INFO]
[INFO] --- maven-resources-plugin:2.5:resources (default-resources) @ hdp-test-examples ---
[debug] execute contextualize
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 10 resources
[INFO]
[INFO] --- maven-compiler-plugin:2.3.2:compile (default-compile) @ hdp-test-examples ---
[INFO] Compiling 5 source files to /opt/hdp-test-examples/target/classes
[INFO]
[INFO] --- maven-resources-plugin:2.5:testResources (default-testResources) @ hdp-test-examples ---
[debug] execute contextualize
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO]
[INFO] --- maven-compiler-plugin:2.3.2:testCompile (default-testCompile) @ hdp-test-examples ---
[INFO] Compiling 1 source file to /opt/hdp-test-examples/target/test-classes
[INFO]
[INFO] --- maven-surefire-plugin:2.10:test (default-test) @ hdp-test-examples ---
[INFO] Surefire report directory: /opt/hdp-test-examples/target/surefire-reports
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running com.jj.hbase.HBaseClientTest
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.552 sec
Results :
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 5.145s
[INFO] Finished at: Wed Jul 19 07:19:34 UTC 2017
[INFO] Final Memory: 38M/91M
[INFO] ------------------------------------------------------------------------

可以自己读一下单元测试代码 /opt/hdp-test-examples/src/test/java/com/jj/hbase/HBaseClientTest.java 。看上去,代码中它似乎连接上 HBase,然后建表并插入几行数据。

碰到的问题

  • 虚拟机内存不足,将内存由 3G 改成 4G 后问题解决;
  • 构建过程中一些 jar 包下载失败,修改 pom.xml,去掉 Scala 相关依赖后问题解决;
  • pig 测试失败,删除 pig 的单元测试目录;
  • 通过 HBase shell 无法进行 grant,改用 hbase-hdp2610 主体并加大虚拟机内存后解决。

这里是 完整代码 。

windows下的测试

前文是在 Centos7.3下进行的测试。下面在 Windows下进行测试。毕竟很多人使用 Windows+Eclipse 进行开发。下面的测试并没有直接使用 Eclipse,而是更直接的命令行测试。希望有人能够补充上 Eclipse 下的测试。关于 Eclipse 下的相关配置可以参考 hortonworks 的一篇 社区文章(“Hortonworks Data Platform Artifacts”) 。

测试使用了git bash命令行工具。git base在 Windows 下模拟的类似 linux 的命令,但实际上使用的 Windows 操作系统文件。关于 git base 的安装使用参考 这个文档《Ambari 在本地 VM 部署 Hadoop 集群》 。在 git base 上测试通过后,之后又直接在 Windows命令行下进行了测试。需要说明的是,git bash 和 Windows使用了不同的环境变量,如PATH。

在 Windows下需要安装 JDK1.8 和 Maven。Maven是 Java 实现的,所以是所有平台通用的。在 Maven 的 这篇文档(“Maven on Windows”) 中要求 JDK 的安装目录名称不要有空格(如 Program Files 就不行)。Maven被我安装在了 e:\maven 。在 git bash 下运行 Maven 的方法是 /e/maven/bin/mvn 。

准备代码和配置文件

测试在 Windows的 e:\opt 目录下进行。以下操作在 git bash 窗口中进行:

$ cd /e/opt
$ git clone https://github.com/wbwangk/hdp-test-examples
$ cd hdp-test-examples
$ scp root@c7302:/etc/krb5.conf src/main/resources/
$ scp root@c7302:/etc/hbase/conf/hbase-site.xml src/main/resources/
$ scp root@c7302:/opt/hdp-test-examples/src/main/resources/jj.keytab src/main/resources/

上述三个 scp 操作时把测试用到3个配置文件从 Linux 下网络复制到了 Windows下。确保 Windows的 hosts 文件中定义了3台虚拟机的 IP 和域名。

执行构建和单元测试

$ /e/maven/bin/mvn clean test
          Bill Ward / AdminTome: Data Pipeline: Send logs from Kafka to Cassandra      Cache   Translate Page   Web Page Cache   


In this post, I will outline how I created a big data pipeline for my web server logs using Apache Kafka, python, and Apache Cassandra.

In past articles I described how to install and configure Apache Kafka and Apache Cassandra. I assume that you already have a Kafka broker running with a topic of www_logs and a production-ready Cassandra cluster running. If you don't, please follow the articles mentioned in order to follow along with this tutorial.

In this post, we will tie them together to create a big data pipeline that will take web server logs and push them to an Apache Cassandra based data sink.

This will give us the opportunity to go through our logs using SQL statements and possible other benefits like applying machine learning to predict if there is an issue with our site.

Here is the basic diagram of what we are going to configure:



Let's see how we start the pipeline by pushing log data to our Kafka topic.

Pushing logs to our data pipeline

Apache Web Server writes its logs to /var/log/apache2. For this tutorial, we will work with the Apache access log, which records requests to the web server. Here is an example:

108.162.245.143 - - [08/Aug/2018:17:44:40 +0000] "GET /blog/terraform-taint-tip/ HTTP/1.0" 200 31281 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

Log files are simply text files where each line is an entry in the log file.

In order to easily read our logs from a Python application that we will write later, we will want to convert these log lines into JSON data and add a few more fields.

Here is what our JSON will look like:

{
"log": {
"source": "",
"type": "",
"datetime": "",
"log": ""
}
}

The source field is going to be the hostname of our web server. The type field is going to let us know what type of logs we are sending. In this case it will be ‘www_access’ since we are going to send Apache access logs. The datetime field will hold the timestamp value of when the log was created. Finally, the log field will contain the entire line of text representing the log entry.
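
For the sample access-log line shown earlier, and assuming the web server's hostname is www2 (the hostname used later in this post), the forwarder would emit JSON along these lines (pretty-printed here for readability; the actual message is a single line):

{
  "log": {
    "source": "www2",
    "type": "www_access",
    "datetime": "2018-08-08 17:44",
    "log": "'108.162.245.143 - - [08/Aug/2018:17:44:40 +0000] \"GET /blog/terraform-taint-tip/ HTTP/1.0\" 200 31281 \"-\" \"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)\"'"
  }
}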

I created a sample Python application that takes these logs and forwards them to Kafka. You can find it on GitHub at admintome/logs2kafka. Let's look at the forwarder.py file in more detail:

import time
import datetime
import socket
import json
from mykafka import MyKafka


def parse_log_line(line):
    strptime = datetime.datetime.strptime
    hostname = socket.gethostname()
    time = line.split(' ')[3][1::]
    entry = {}
    entry['datetime'] = strptime(
        time, "%d/%b/%Y:%H:%M:%S").strftime("%Y-%m-%d %H:%M")
    entry['source'] = "{}".format(hostname)
    entry['type'] = "www_access"
    entry['log'] = "'{}'".format(line.rstrip())
    return entry


def show_entry(entry):
    temp = ",".join([
        entry['datetime'],
        entry['source'],
        entry['type'],
        entry['log']
    ])
    log_entry = {'log': entry}
    temp = json.dumps(log_entry)
    print("{}".format(temp))
    return temp


def follow(syslog_file):
    syslog_file.seek(0, 2)
    pubsub = MyKafka(["mslave2.admintome.lab:31000"])
    while True:
        line = syslog_file.readline()
        if not line:
            time.sleep(0.1)
            continue
        else:
            entry = parse_log_line(line)
            if not entry:
                continue
            json_entry = show_entry(entry)
            pubsub.send_page_data(json_entry, 'www_logs')


f = open("/var/log/apache2/access.log", "rt")
follow(f)

The first thing we do is open the log file /var/log/apache2/access.log for reading. We then pass that file to our follow() function, where our application will follow the log file much like tail -f /var/log/apache2/access.log would.

If the follow function detects that a new line exists in the log, it converts it to JSON using the parse_log_line() function. It then uses the send_page_data() function of MyKafka to push the JSON message to the www_logs topic.

Here is the MyKafka.py python file:

from kafka import KafkaProducer
import json


class MyKafka(object):

    def __init__(self, kafka_brokers):
        self.producer = KafkaProducer(
            value_serializer=lambda v: json.dumps(v).encode('utf-8'),
            bootstrap_servers=kafka_brokers
        )

    def send_page_data(self, json_data, topic):
        result = self.producer.send(topic, key=b'log', value=json_data)
        print("kafka send result: {}".format(result.get()))

This simply calls KafkaProducer to send our JSON as a key/value pair where the key is the string ‘log’ and the value is our JSON.

Now that we have our log data being pushed to Kafka, we need to write a consumer in Python to pull messages off the topic and save them as rows in a Cassandra table.

But first we should prepare Cassandra by creating a Keyspace and a table to hold our log data.

Preparing Cassandra

In order to save our data to Cassandra we need to first create a Keyspace in our Cassandra cluster. Remember that a keyspace is how we tell Cassandra a replication strategy for any tables attached to our keyspace.

Let’s start up CQLSH.

$ bin/cqlsh cass1.admintome.lab
Connected to AdminTome Cluster at cass1.admintome.lab:9042.
[cqlsh 5.0.1 | Cassandra 3.11.3 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
cqlsh>

Now run the following query to create our keyspace.

CREATE KEYSPACE admintome WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '3'} AND durable_writes = true;

Now run this query to create our logs table.

CREATE TABLE admintome.logs (
log_source text,
log_type text,
log_id timeuuid,
log text,
log_datetime text,
PRIMARY KEY ((log_source, log_type), log_id)
) WITH CLUSTERING ORDER BY (log_id DESC);

Essentially, we are storing time series data which represents our log file information.

You can see that we have a column for source, type, datetime, and log that match our JSON from the previous section.

We also have another column called log_id, which is of the type timeuuid. This generates a unique UUID from the current timestamp when we insert a record into this table.

Cassandra distributes data by partition, and a partition is identified by the partition key portion of the PRIMARY KEY. In this example, we use a COMPOSITE partition key made up of both the log_source and the log_type values.

So for our example, we are going to create a single partition in Cassandra consisting of the partition key ('www2', 'www_access'). The hostname of my web server is www2, so that is what log_source is set to.

We also set the clustering key to log_id. Since these values are guaranteed to be unique, we can store multiple rows in the same partition.
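
As an illustration of how this model is queried, once rows are flowing in you could fetch the latest entries for the partition with something like:

SELECT log_datetime, log
FROM admintome.logs
WHERE log_source = 'www2'
  AND log_type = 'www_access'
LIMIT 10;

Because the table clusters on log_id in descending order, this returns the newest log lines first.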

If I lost you there don’t worry, it took me a couple of days and many headaches to understand it fully. I will be writing another article soon detailing why the data is modeled in this fashion for Cassandra.

Now that we have our Cassandra keyspace and table ready to go, we need to write our Python consumer to pull the JSON data from our Kafka topic and insert that data into our table as a new row.

Python Consumer Application

I have posted the source code to the
          HBase 高性能随机查询之道:HFile 原理解析      Cache   Translate Page   Web Page Cache   

在各色数据库系统百花齐放的今天,能让大家铭记的,往往是一个数据库所能带给大家的差异化能力。正如梁宁老师的产品思维课程中所讲到的,这是一个数据库系统所能带给产品使用者的”确定性”。

差异化能力通常需要从数据库底层开始构筑,而数据存储方式显得至关重要,因为它直接关乎数据写入与读取的效率。在一个系统中,这两方面的能力需要进行很好的权衡:如果设计有利于数据的快速写入,可能意味着查询时需要需要花费较大的精力去组织数据,反之,如果写入时花费精力去更好的组织数据,查询就会变的非常轻松。

探讨数据库的数据存储方式,其实就是探讨数据如何在磁盘上进行有效的组织。因为我们通常以如何高效读取和消费数据为目的,而不是数据存储本身。在RDBMS领域,因为键与数据的组织方式的区别,有两种表组织结构最为常见,一种是键与数据联合存储的 索引组织表 结构,在这种表结构下,查到键值意味着查找到数据;另外一种是键与数据分离存储的 堆表 结构。在这种表结构下,查找到键以后,只是拿到了数据记录的物理地址,还需要基于该物理地址去查找具体的数据记录。

在大数据分析领域,有几种通用的文件格式,如Parquet, RCFile, ORCFile,CarbonData等等,这些文件大多基于列式的设计结构,来加速通用的分析型查询。但在实时数据库领域,却以各种私有的文件格式最为常见,如Bigtable的SSTable,HBase的HFile,Kudu的DiskRowSets,Cassandra的变种SSTable,MongoDB支持的每一种Storage Engine都是私有的文件格式设计,等等。

本文将详细探讨HBase的HFile设计,第一部分为HFile原理概述,第二部分介绍了一个HFile从无到有的生成过程,最后部分列出了几点与HFile有关的附加信息。

华为云上的HBase服务:

点击本文末尾处的" 阅读原文 "链接,可了解 华为云 上的 全托管式HBase服务CloudTable , 集成了 时序数据库 OpenTSDB 与 时空数据库 GeoMesa ,目前 已正式商用 。

本系列文章

开篇内容

介绍HBase的数据模型、适用场景、集群关键角色、建表流程以及所涉及的HBase基础概念。 Writer全流程 介绍了写数据的接口,RowKey定义,数据在客户端的组装,数据路由,打包分发,以及RegionServer侧将数据写入到Region中的全部流程。 Flush与Compaction 阐述了Flush与Compaction流程,讲述了Compaction所面临的本质问题,介绍了HBase现有的几种Compaction策略以及各自的适用场景。

Read全流程

首先介绍了HBase的两种读取模式(Get与Scan),而后详细介绍了Scan的详细实现流程。

HFile原理概述

最初的HFile格式(HFile V1),参考了Bigtable的SSTable以及Hadoop的TFile(HADOOP-3315)。如下图所示:


HBase 高性能随机查询之道:HFile 原理解析

HFile在生成之前,数据在内存中已经是按序组织的。存放用户数据的KeyValue,被存储在一个个默认为64kb大小的Data Block中,在Data Index部分存储了每一个Data Block的索引信息{Offset,Size,FirstKey},而Data Index的索引信息{Data Index Offset, Data Block Count}被存储在HFile的Trailer部分。除此以外,在Meta Block部分还存储了Bloom Filter的数据。下图更直观的表达出了HFile V1中的数据组织结构:


HBase 高性能随机查询之道:HFile 原理解析

这种设计简单、直观。但用过0.90或更老版本的同学,对于这个HFile版本所存在的问题应该深有痛楚:Region Open的时候,需要加载所有的Data Block Index数据,另外,第一次读取时需要加载所有的Bloom Filter数据到内存中。一个HFile中的Bloom Filter的数据大小可达百MB级别,一个RegionServer启动时可能需要加载数GB的Data Block Index数据。这在一个大数据量的集群中,几乎无法忍受。

Data Block Index究竟有多大?

一个Data Block在Data Block Index中的索引信息包含{Offset, Size, FirstKey},BlockOffset使用Long型数字表示,Size使用Int表示即可。假设用户数据RowKey的长度为50bytes,那么,一个64KB的Data Block在Data Block Index中的一条索引数据大小约为62字节。

假设一个RegionServer中有500个Region,每一个Region的数量为10GB(假设这是Data Blocks的总大小),在这个RegionServer上,约有81920000个Data Blocks,此时,Data Block Index所占用的大小为81920000*62bytes,约为4.7GB。

这是HFile V2设计的初衷,HFile V2期望显著降低RegionServer启动时加载HFile的时延,更希望解决一次全量加载数百MB级别的BloomFilter数据带来的时延过大的问题。下图是HFile V2的数据组织结构:


HBase 高性能随机查询之道:HFile 原理解析

较之HFile V1,我们来看看HFile V2的几点显著变化:

1.分层索引

无论是Data Block Index还是Bloom Filter,都采用了 分层索引 的设计。

Data Block的索引,在HFile V2中做多可支持三层索引:最底层的Data Block Index称之为Leaf Index Block,可直接索引到Data Block;中间层称之为Intermediate Index Block,最上层称之为Root Data Index,Root Data index存放在一个称之为" Load-on-open Section "区域,Region Open时会被加载到内存中。基本的索引逻辑为:由Root Data Index索引到Intermediate Block Index,再由Intermediate Block Index索引到Leaf Index Block,最后由Leaf Index Block查找到对应的Data Block。在实际场景中, Intermediate Block Index基本上不会存在 ,文末部分会通过详细的计算阐述它基本不存在的原因,因此,索引逻辑被简化为:由Root Data Index直接索引到Leaf Index Block,再由Leaf Index Block查找到的对应的Data Block。

Bloom Filter也被拆成了多个Bloom Block,在"Load-on-open Section"区域中,同样存放了所有Bloom Block的索引数据。

2.交叉存放

在" Scanned Block Section "区域,Data Block(存放用户数据KeyValue)、存放Data Block索引的Leaf Index Block(存放Data Block的索引)与Bloom Block(Bloom Filter数据)交叉存在。

3.按需读取

无论是Data Block的索引数据,还是Bloom Filter数据,都被拆成了多个Block,基于这样的设计,无论是索引数据,还是Bloom Filter,都可以 按需读取 ,避免在Region Open阶段或读取阶段一次读入大量的数据, 有效降低时延 。

从0.98版本开始,社区引入了HFile V3版本,主要是为了支持Tag特性,在HFile V2基础上只做了微量改动。在下文内容中,主要围绕 HFile V2 的设计展开。

HFile如何生成

在本章节,我们以Flush流程为例,介绍如何一步步生成HFile的流程,来加深大家对于HFile原理的理解。

起初,HFile中并没有任何Block,数据还存在于MemStore中。

Flush发生时,创建HFile Writer,第一个空的Data Block出现,初始化后的Data Block中为Header部分预留了空间,Header部分用来存放一个Data Block的元数据信息。

而后,位于MemStore中的KeyValues被一个个append到位于内存中的第一个Data Block中:


HBase 高性能随机查询之道:HFile 原理解析

注:如果配置了Data Block Encoding,则会在Append KeyValue的时候进行同步编码,编码后的数据不再是单纯的KeyValue模式。Data Block Encoding是HBase为了降低KeyValue结构性膨胀而提供的内部编码机制。上图中所体现出来的KeyValue,只是为了方便大家理解。

当Data Block增长到预设大小(默认64KB)后,一个Data Block被停止写入,该Data Block将经历如下一系列处理流程:

1. 如果有配置启用压缩或加密特性,对Data Block的数据按相应的算法进行压缩和加密。


HBase 高性能随机查询之道:HFile 原理解析

2. 在预留的Header区,写入该Data Block的元数据信息,包含{压缩前的大小,压缩后的大小,上一个Block的偏移信息,Checksum元数据信息}等信息,下图是一个Header的完整结构:


HBase 高性能随机查询之道:HFile 原理解析

3. 生成Checksum信息。


HBase 高性能随机查询之道:HFile 原理解析

4. Data Block以及Checksum信息通过HFile Writer中的 输出流 写入到HDFS中。

5. 为输出的Data Block生成一条 索引记录 ,包含这个Data Block的{起始Key,偏移,大小}信息,这条索引记录被暂时记录到内存的Block Index Chunk中:


HBase 高性能随机查询之道:HFile 原理解析

注:上图中的firstKey并不一定是这个Data Block的第一个Key,有可能是上一个Data Block的最后一个Key与这一个Data Block的第一个Key之间的一个中间值。具体可参考附录部分的信息。

至此, 已经写入了第一个Data Block ,并且在Block Index Chunk中记录了关于这个Data Block的一条索引记录。

随着Data Blocks数量的不断增多, Block Index Chunk 中的记录数量也在不断变多。当Block Index Chunk达到一定大小以后(默认为128KB),Block Index Chunk也经与Data Block的类似处理流程后输出到HDFS中,形成第一个 Leaf Index Block :


HBase 高性能随机查询之道:HFile 原理解析

此时,已输出的 Scanned Block Section 部分的构成如下:


HBase 高性能随机查询之道:HFile 原理解析

正是因为Leaf Index Block与Data Block在Scanned Block Section交叉存在,Leaf Index Block被称之为 Inline Block (Bloom Block也属于Inline Block)。在内存中还有一个 Root Block Index Chunk 用来记录每一个Leaf Index Block的 索引信息 :


HBase 高性能随机查询之道:HFile 原理解析

从Root Index到Leaf Data Block再到Data Block的索引关系如下:


HBase 高性能随机查询之道:HFile 原理解析

我们先假设没有Bloom Filter数据。当MemStore中所有的KeyValues全部写完以后,HFile Writer开始在close方法中处理最后的”收尾”工作:

1. 写入最后一个Data Block。

2. 写入最后一个Leaf Index Block。

如上属于 Scanned Block Section 部分的”收尾”工作。

3. 如果有MetaData则写入位于 Non-Scanned Block Section 区域的Meta Blocks,事实上这部分为空。

4. 写Root Block Index Chunk部分数据:

如果Root Block Index Chunk超出了预设大小,则输出位于 Non-Scanned Block Section 区域的Intermediate Index Block数据,以及生成并输出Root Index Block(记录Intermediate Index Block索引)到 Load-On-Open Section 部分。

如果未超出大小,则直接输出为 Load-On-Open Section 部分的Root Index Block。

5. 写入用来索引Meta Blocks的Meta Index数据(事实上这部分只是写入一个空的Block)。

6. 写入FileInfo信息,FileInfo中包含:

Max SequenceID, MajorCompaction标记,TimeRanage信息,最早的Timestamp, Data BlockEncoding类型,BloomFilter配置,最大的Timestamp,KeyValue版本,最后一个RowKey,平均的Key长度,平均Value长度,Key比较器等。

7. 写入Bloom Filter元数据与索引数据。

注:前面每一部分信息的写入,都以Block形式写入,都包含Header与Data两部分,Header中的结构也是相同的,只是都有不同的Block Type,在Data部分,每一种类型的Block可以有自己的定义。

8. 写入Trailer部分信息, Trailer中包含:

Root Index Block的Offset,FileInfo部分Offset,Data Block Index的层级,Data Block Index数据总大小,第一个Data Block的Offset,最后一个Data Block的Offset,Comparator信息,Root Index Block的Entries数量,加密算法类型,Meta Index Block的Entries数量,整个HFile文件未压缩大小,整个HFile中所包含的KeyValue总个数,压缩算法类型等。

至此, 一个完整的HFile已生成。 我们可以通过下图再简单回顾一下Root Index Block、Leaf Index Block、Data Block所处的位置以及索引关系:


HBase 高性能随机查询之道:HFile 原理解析

简单起见,上文中刻意忽略了Bloom Filter部分。Bloom Filter被用来快速判断一条记录是否在一个大的集合中存在,采用了多个Hash函数+位图的设计。写入数据时,一个记录经X个Hash函数运算后,被映射到位图中的X个位置,将位图中的这X个位置写为1。判断一条记录是否存在时,也是通过这个X个Hash函数计算后,获得X个位置,如果位图中的这X个位置都为1,则表明该记录"可能存在",但如果至少有一个为0,则该记录"一定不存在"。详细信息,大家可以直接参考Wiki,这里不做过多展开。

Bloom Filter包含 Bloom元数据(Hash函数类型,Hash函数个数等) 与 位图数据 ( BloomData ),为了避免每一次读取时加载所有的Bloom Data,HFile V2中将BloomData部分分成了多个小的 Bloom Block 。BloomData数据也被当成一类 Inline Block ,与Data Block、Leaf Index Block交叉存在,而关于Bloom Filter的 元数据 与多个Bloom Block的 索引 信息,被存放在 Load-On-Open Section 部分。但需要注意的是,在 FileInfo 部分,保存了关于BloomFilter配置类型信息,共包含三种类型:不启用,基于Row构建BloomFilter,基于Row+Column构建Bloom Filter。混合了BloomFilter Block以后的HFile构成如下图所示:


HBase 高性能随机查询之道:HFile 原理解析
附录1 多大的HFile文件才存在Intermediate Index Block

每一个Leaf Index Block大小的计算方法如下( HFileBlockIndex$BlockIndexChunk#getNonRootSize ):


HBase 高性能随机查询之道:HFile 原理解析

curTotalNonRootEntrySize 是在每次写入一个新的Entry的时候累加的:


HBase 高性能随机查询之道:HFile 原理解析

这样可以看出来,每一次新增一个Entry,则累计的值为:

12 + firstKey.length

假设一个Leaf Index Block可以容纳的Data Block的数量为x:

4 + 4 * (x + 1) + x * (12 + firstKey.length)

进一步假设,firstKey.length为50bytes。而一个Leaf Index Block的默认最大大小为128KB:

4 + 4 * (x + 1) + x * (12 + 50) = 128 * 1024

x ≈ 1986

也就是说,在假设firstKey.length为50Bytes时,一个128KB的Leaf Index Block所能容纳的Data Block数量约为 1986 个。

我们再来看看Root Index Chunk大小的计算方法:


HBase 高性能随机查询之道:HFile 原理解析

基于firstKey为50 Bytes的假设,每往Root Index Chunk中新增一个Entry(关联一个Leaf Index Block),那么,curTotalRootSize的累加值为:

12 + 1 + 50 = 63

因此,一个128KB的Root Index Chunk可以至少存储2080个Entries,即可存储2080个Leaf Index Block。

这样, 一个Root Index Chunk所关联的Data Blocks的总量应该为:

1986 * 2080 = 4,130,880

而每一个Data Block默认大小为64KB,那么,这个HFile的总大小至少为:

4,130,880 * 64 * 1024 ≈ 252 GB

即,基于每一个Block中的FirstKey为50bytes的假设,一个128KB的Root Index Block可容纳的HFile文件总大小约为252GB。

如果实际的RowKey小于50 Bytes,或者将Data Block的Size调大,一个128KB的Root Index Chunk所关联的HFile文件将会更大。因此,在大多数场景中,Intermediate Index Block并不会存在。

附录2 关于HFile数据查看工具

HBase中提供了一个名为HFilePrettyPrinter的工具,可以以一种直观的方式查看HFile中的数据,关于该工具的帮助信息,可通过如下命令查看:

hbase org.apache.hadoop.hbase.io.hfile.HFile

References

HBase Architecture 101 Storage

HBASE-3857: Change the HFile Format

HBase Document: Appendix H: HFile format

HADOOP-3315: New Binary file format

SSTable and Log Structured Storage: LevelDB

点击" 阅读原文 "链接,可了解 华为云 上的 全托管式HBase服务CloudTable , 集成了 时序数据库 OpenTSDB 与 时空数据库 GeoMesa ,目前 已正式商用 。

关于"NoSQL漫谈"

NoSQL主要泛指一些分布式的非关系型数据存储技术,这其实是一个非常广泛的定义,可以说涉及到分布式系统技术的方方面面。随着人工智能、物联网、大数据、云计算以及区块链技术的不断普及,NoSQL技术将会发挥越来越大的价值。 请长按下面的二维码关注我们 更多NoSQL技术分享,敬请期待!


          Marvel quiere de regreso a James Gunn y buscará convencer a Disney      Cache   Translate Page   Web Page Cache   
Marvel Studios wants James Gunn back as director for Guardians of the Galaxy Vol. 3 and is trying to convince Disney.
          GSOC 2018 – Improved InterMine Search with Solr      Cache   Translate Page   Web Page Cache   
Currently InterMine uses the Apache Lucene (v3.0.2) library to index the data and provide a keyword-style search over all data. The goal of this project is to introduce Apache Solr into InterMine so that indexing and searching can happen even more quickly. Unlike Lucene, which is a library, Apache Solr is a separate server application which is similar … Continue reading GSOC 2018 – Improved InterMine Search with Solr
          Telecommute Commercial and Government Developer      Cache   Translate Page   Web Page Cache   
A cybersecurity company is searching for a person to fill their position for a Telecommute Commercial and Government Developer.

Must be able to:

  • Use Java/J2EE, Ruby on Rails, Python, Ruby, Node.js, C/C++, Assembly, UNIX/Linux, Apache, and cross platform mobile app development technologies
  • Follow Scrum/Kanban methodologies
  • Build systems that support applications to be delivered on multiple end point devices

Required Skills:

  • Demonstrated on-the-job experience developing with HTML5, PHP, JavaScript, and jQuery
  • Demonstrated on-the-job experience with Ruby on Rails, Node.js, or AngularJS
  • Demonstrated on-the-job experience with Python, Node.js, C/C++, Assembly
  • Demonstrated on-the-job experience with REST services
  • Demonstrated on-the-job experience working in Agile teams
          Comment on Kanye West Says He Refuses To Be A Prisoner To ‘Monolithic Thought’ by CherokeeApache Irish      Cache   Translate Page   Web Page Cache   
Lying Jimmy Kimmel trying to do a gotcha question at the end of the video on top of immigration at the border by spouting a complete lie. Kanye needs to get with Candace Owens more and get educated on the issues more so he doesn't get caught off guard by these lying TV hosts.
          Comment on REBUTTAL: Vox Wants Censorship of “Right Wing” YouTubers by CherokeeApache Irish      Cache   Translate Page   Web Page Cache   
All the leftists are complete hypocrites when it comes to who advocates violence, bigotry and censorship against political opposition.
          Comment on REBUTTAL: Vox Wants Censorship of “Right Wing” YouTubers by CherokeeApache Irish      Cache   Translate Page   Web Page Cache   
VOX male looks to be mentally, physically and spiritually weak.
          Getting Started with Apache Kafka and Kubernetes      Cache   Translate Page   Web Page Cache   

Enabling everyone to run Apache Kafka® on Kubernetes is an important part of our mission to put a streaming platform at the heart of every company. This is why we look […]

The post Getting Started with Apache Kafka and Kubernetes appeared first on Confluent.


          Problems in Plesk      Cache   Translate Page   Web Page Cache   
I have a VPS web server with Plesk that I restored from a backup after it suddenly stopped working properly. Even with the backup restored, it is still not functioning 100% correctly. Some small things need to be fixed... (Budget: €8 - €30 EUR, Jobs: Apache, Linux, Magento, MySQL, PHP)
          DevOps Engineer - ROZEE.PK - Lahore      Cache   Translate Page   Web Page Cache   
MySQL database administration. Linux (Ubuntu, CentOS) and FreeBSD administration. Installation, administration and securing web servers e.g Apache, Nginx etc....
From Rozee - Mon, 06 Aug 2018 10:44:23 GMT - View all Lahore jobs
          TV Good Sleep Bad Podcast: ‘Fringe’ & ‘Apaches’      Cache   Translate Page   Web Page Cache   

Elwood and Lackey return with another double dose of cult TV goodness, this time kicking off with Fringe, a show dismissed by some as an X-Files cash-in; the guys argue why this show was so much more. Rounding out this double bill is the 1977 public information film Apaches by John Mackenzie, who would go on […]

The post TV Good Sleep Bad Podcast: ‘Fringe’ & ‘Apaches’ appeared first on That Moment In.


          Kommentar zu Emojis: Neue Kandidaten für Emoji 12 gesichtet von Gerald      Cache   Translate Page   Web Page Cache   
Oh great, we now have roughly 500 different variations of faces, each in 300 different shades of skin color and origin. On top of that, another 300 variations of the various ways the different male/female/Apache-attack-helicopter genders can hold hands. But the car emoji comes in exactly ONE color. Which isn't even identical across platforms. Well, GREAT job, dear social justice warriors. I suppose I'll have to found a drivers' party to be able to put an end to this outrageous injustice?
          Open Source at Indeed: Sponsoring the Apache Software Foundation      Cache   Translate Page   Web Page Cache   

As Indeed continues to grow our commitment to the open source community, we are pleased to announce our sponsorship of the Apache Software Foundation. Earlier this year, we joined the Cloud Native Computing Foundation and began sponsoring the Python Software Foundation. For Indeed, this is just the beginning of our work with open source initiatives.  […]

The post Open Source at Indeed: Sponsoring the Apache Software Foundation appeared first on Indeed Engineering Blog.


          Mesos at Indeed: Fostering Independence at Scale      Cache   Translate Page   Web Page Cache   

Independent teams are vital to Indeed development. With a rapidly growing organization, we strive to reduce the number of team dependencies. At Indeed, we let teams manage their own deployment infrastructure. This benefits velocity, quality, and architectural scalability. Apache Mesos helps us eliminate operational bottlenecks and empower teams to more fully own their products.

The operations bottleneck

During […]

The post Mesos at Indeed: Fostering Independence at Scale appeared first on Indeed Engineering Blog.


          javax.xml.rpc.ServiceException: java.lang.ClassNotFoundException      Cache   Translate Page   Web Page Cache   
I have created a web service in JDeveloper in Java code. I have packaged and deployed a JAR to run on a local server. However, I'm getting the following error when I run it from the command line, even though it runs just fine in JDev...

javax.xml.rpc.ServiceException: java.lang.ClassNotFoundException: proxy.Integ_Extratos_Portal_ServiceLocator
     at org.apache.axis.client.ServiceFactory.createService(ServiceFactory.java:321)
     at org.apache.axis.client.ServiceFactory.loadService(ServiceFactory.java:237)
     at proxy.Integ_Extratos_PortalPortClient.<init>(Integ_Extratos_PortalPortClient.java:18)
     at Oracle.main(Oracle.java:34)

I can't find any reference to the class Integ_Extratos_Portal_ServiceLocator in my project. Anybody come across this before?
I tried to run it in Eclipse but I receive the same error.

Can somebody help me?

          Apache Kafka: A Framework for Handling Real-Time Data Feeds      Cache   Translate Page   Web Page Cache   

Apache Kafka is a distributed streaming platform. It is incredibly fast, which is why thousands of companies like Twitter, LinkedIn, Oracle, Mozilla and Netflix use it in production environments. It is horizontally scalable and fault tolerant. This article looks at its architecture and characteristics. Apache Kafka is a powerful asynchronous messaging technology originally developed by […]

The post Apache Kafka: A Framework for Handling Real-Time Data Feeds appeared first on Open Source For You.


          Software Engineer - Secret Clearance - Procession Systems - Reston, VA      Cache   Translate Page   Web Page Cache   
Experience with Java, Microsoft Internet Information Services (IIS), Apache Tomcat, and Adobe ColdFusion....
From Indeed - Mon, 06 Aug 2018 19:58:40 GMT - View all Reston, VA jobs
          LES COURSES JUSQU'AU 12 AOUT      Cache   Translate Page   Web Page Cache   

TROPHEE DES AS

DIMANCHE 12 AOUT

course camarguaiseMAUGUIO : 16 h 30, 11 €, CT Le Trident, dél. Garcia. Chr. VENTADOUR

Lautier : GERMINAL - Le Joncas : FETICHE - La Galère : TORONTO - Le Ternen : MUIRON - Blanc : SAINT-ELOI - Paulin : COLBERT

Groupe 1 - Coef. 1 - Raseteurs : Aliaga, Belgourari, Ciacchini, Errik, Gros, Katif, Rassir, Ouffe

PEROLS : 16 h 30, 11 €, mairie, dél. Gil. Chr. CYRIL

Rouquette : APARICIO - Saumade : MEDOC - Michel : GIGOLO - Plo : TRITON - Blanc : OURAZI - Chaballier : PAPALINO - Lautier : TIMOKO (hp)

Groupe 1 - Coef. 1 - Raseteurs : Allam, Bouhargane, Cadenas, Dunan, Marquis, Naïm

LES SAINTES-MARIES-DE-LA-MER : 17 h, 13 €, SASTaureaux des Saintes, dél. Servais. Chr. ANNELYSE

Finale du Trophée des Impériaux - Souvenir Paul-Garric

Laurent : LEBRAU - Cuillé : RUBICON - Fabre-Mailhan : NIMOIS - Nicollin : SUGAR - Bon : MONRO - Ricard : MARQUIS - Les Baumelles : PINO (hp)

Groupe 1 - Coef. 1,5 - Raseteurs : Ayme, Charrade, Félix, Four, Marignan, J.Martin, Robert, Zekraoui

TROPHEE DE L'AVENIR

VENDREDI 10 AOUT

UCHAUD : 16 h, 8 €, CT Lou Vovo, dél.Pradeilles. Chr. CYRIL

Guillierme : AUCELOUN - Lagarde : GIL - Saumade : ESTOUBLON - Blatière-Bessac : MADIBA - Saint-Pierre : DESGRESSAIRE - L’Amarée : MISTRAL - Aubanel-Baroncelli : PIERROT (hp)

Gr. 3 - Raseteurs : Bruschet, Clarion, Fougère, Gougeon, Miralles, Sanchez

ORGON : 17 h, 9 €, CT La Bouvine, dél.Mouiren. Chr. CELINE

Lautier : GITANO - Guillerme : ABELU - MOUSTACHE - Fabre-Mailhan : MONARQUE - AIGLON - Coulet : CIGALOUN - Lautier : MONTEGO (hp)

Gr. 3 - Raseteurs : Bernard, Bressy, Gaillardet, J.Martin, Marquier, Moutet

SAMEDI 11 AOUT

FOURQUES : 16 h 30, 9 €, CTPR, dél.Molinie. Chr. CYRIL

Lautier : SARIGAN - Martini : VAUJANY - Le Brestalou : RAMIER - Ricard : ANIS - Michel : CALISTE - Blanc : FELIN

Groupe 3 - Raseteurs : Aliaoui, Ameraoui, Benhammou, El Mahboub
Gautier, Marquis, Moutet, Naïm

VAUVERT : 17 h, 9 €, CT L’Abrivado, dél. Castagnier. Chr. E. M.

34e Trophée Christian-Mestre - 7e Souvenir Nicole-Cartalade-Dumatras
Lautier : PECOULE - Nicollin : SABRAN - les Baumelles : VOLTAIRE - Le Ternen : CIBRANET - Raynaud : APACHE - Plo : ARMAGNAC

Groupe 3 - Raseteurs : Allam, Clarion, F. Garcia, Laurier, J.Martin
Marquier, Oudjit Ouffe

MAUSSANE-LES-ALPILLES : 17 h, 9 €, CTPR Vallée des Baux, dél. Di Cristofano. Chr. STEPHANIE

Les Baumelles : GAUGUIN - Richebois : CALEU - Navarro : RAPHELOIS - Saint-Pierre : VALLESCURE - Saliérène : PACHECO - Lautier : PARPAIOUN

Groupe 2 - Raseteurs : Bernard, Bressy, Ferriol, Gaillardet, Laurent, Martin-Cocher, Michelier, Sanchis

DIMANCHE 12 AOUT

LANSARGUES : 16 h 30, 9 €, CT Lou Garro, dél. Dumas. Chr. MALI

Souvenir P.-Gibert 1re j. - Janin : ARLEQUIN - Cuillé : VENTOUX - Saumade : JACOB - Rambier : MIRABEAU - Les Termes : FOUGUEUX - Le Brestalou : CADORET - Blatière-Bessac : CARUSO (hp)

Groupe 2 - Raseteurs : Castell, Chahboune, Faure, Jourdan, Laurier,  Y. Martin, Méric

FOURQUES : 16 h 30, 9 €, CTLou Chin Chei, dél. Crape. Chr. E. M.

Guillierme : AMISTADOU - Blanc-Espelly : CRIQUET - Plo : TAMARIN - Raynaud : GUERZY - Sylvéréal : CALISSE - Fournier : MISTRAL

Groupe 2 - Raseteurs : Aliaoui, Ameraoui, Benhammou, el Mahboub, A. Gautier, Villard

MONTFRIN : 16 h 30, 9 €, CT Lou Pougaou, dél. Quiot. Chr. REMY

Grand prix des artisans et commerçants

Guillierme : ESQUIROU - Allard : VEGAS - Saumade : GENEPY - Blatière-Bessac : JASMIN - Le Rhône : BRIANÇON - La Galère : ROCCIO - Ricard : PALUN (hp)

Groupe 2 - Raseteurs : Bakloul, Gaillardet, Guerrero, F. Garcia, Laurent, Moutet, Sanchis

VAUVERT : 17 h, 9 €, Com. des festivités, dél. Pradeilles. Chr. PANI

17e Trophée des Vignerons, 2e j. - Les Baumelles : SOUVIGNARGUAIS - Saumade : HELIOS - Raynaud : GROGNARD - Occitane : DOUANIER - Aubanel-Baroncelli : CETTORI - Félix : SYLVIO - Lagarde : GALAAD (hp)

Groupe 2 - Raseteurs : Assenat, Aroca, Auzolle, Chig, Fouad, Ferriol Marquier, Soler

EYGUIERES : 16 h 30, 9, CT La Bouvine, dél. Rachtan. Chr. VINCENT

Finale du Trophée des Opies - Saint-Antoine : ROUMIE - Le Joncas : ELIXIR - Gillet : AMADEUS - Agu : SOCRATE - Ricard : LOZERIEN - Raynaud : MACHAIRE - Bon : VINCENT (hp)

Groupe 2 - Raseteurs : Brunel, F. Lopez, Martin-Cocher, Matéo, Oudjit, Pradier, Sabot

PALUDS-DE-NOVES : 16 h 30, 9 €, CT des Paluds, dél.Mouiren.
Chr. ANGELIQUE

Saint-Roch, 4e j. - 19e Souvenir Beltrando - 8e souvenir R.-Pauleau

Guillierme : BALAIRE - Fabre-Mailhan : FELIBRE - La Galère : LOU BARRI - Ricard : MORILLION - Plo : URUBU - Didelot-Langlade : JASMIN - Cuillé : N.310 hp

Gr. 3 - Raseteurs : Bernard, Bressy, Chebaiki, N. Favier, Michelier, Moine

AUTRES COURSES

VENDREDI 10 AOUT

LE CAILAR : 16 h 30, gratuit, mairie, dél. Blanc. Vaches jeunes de Nicollin. Raseteurs : L.Garcia, Charnelet, Caizergues, D. Martinez, Lafare.

PEROLS : 18 h, gratuit, mairie, dél. Gil. Taureaux jeunes de Sauvan, Vellas, Le Soleil, Rambier, Vinuesa, Chaballier, Michel, Rouquette. Groupe 2. Raseteurs : Bakloul, Ameraoui, Dunan, Marquis, Errik, Auzolle. Tourneurs : Dunan, Benafitou

SAINT-REMY-DE-PROVENCE : 22 h, 10 €, UTPR, dél.Ayme. Etalons de Blanc, Raynaud, Agu, Cuillé. Groupe 2. Raseteurs : Matéo, Alarcon, Boudouin, A. Gautier, Ferriol, Moine.

SAMEDI 11 AOUT

PALUDS-DE-NOVES : 16 h 30, 9 €, CT des Paluds, dél.Mouiren. Etalons neufs de Didelot-Langlade, Orgonens, Nicollin, Le Joncas. Groupe 2. Raseteurs : Matéo. L. Garcia, Boyer, Boudouin, Douville, Moine.

DIMANCHE 12 AOUT

UCHAUD : 16 h, gratuit, mairie, dél. Fabre. Trophée des Vaches cocardières. La Galère : SARAH - Nicollin : RASCASSE - Chapelle : OCTOPUSSY - Ricard : PALUNETTE - Blatière-Bessac : ANTOINETTE - Raynaud : VENUS - Chaballier : ALBIZZIA - Saint-Pierre : SISEMPE.Raseteurs : Sanchez, Gougeon, Fougère, Clarion, Miralles.

LE CAILAR : 16 h 30, gratuit, mairie, dél. Blanc.Taureaux jeunes de Nicollin. Groupe 2. Raseteurs : L.Garcia, Charnelet, Caizergues, Rey, Lafare.

AUBAIS : 16 h 30, 8 €, CT La Bourgino, dél. Castillo.Etalons et chatres neufs d’Occitane, Les Baumelles, La Galère. Groupe 2. Raseteurs : Pinter, Alarcon, Boyer, Douville.

LIGUES

VENDREDI 10 AOUT

REMOULINS : 17 h, 5 €, Union taurine, dél. Allemand. Manades Raynaud, Guillierme, Lautier. Raseteurs : Izard, Denis, Friakh, Assenat, Danna, Guerrero. Tourneurs : Khaled, P. Rado.

SAMEDI 11 AOUT

UCHAUD : 16 h, 5 €, CT Lou Vovo, dél. Fabre. Manades Blatière-Bessac, Saumade, Guillierme. Raseteurs : Boualam, Diniakos, K. Martinez, Cugnière-Tourreau, Meseguer, Viscomi. Tourneurs : Dumont, T. Mondy.

LE CAILAR : 16 h 30, gratuit, mairie, dél. Blanc. Manade Blatière-Bessac. Raseteurs : Castillo, Youmouri, L.Lopez, T. Roux, A. Roux, Lassere. Tourneurs : Roux, Lebrun.


          How we designed the Quotas microservice to prevent resource abuse      Cache   Translate Page   Web Page Cache   


As the business has grown, Grab’s infrastructure has changed from a monolithic service to dozens of microservices. And that number will soon be expressed in hundreds. As our engineering team grows in parallel, having a microservice framework provides benefits such as higher flexibility, productivity, security, and system reliability. Teams define Service Level Agreements (SLA) with their clients, meaning specification of their service’s API interface and its related performance metrics. As long as the SLAs are maintained, individual teams can focus on their services without worrying about breaking other services.

However, migrating to a microservice framework can be tricky due to the large number of services and the need to communicate between them. Problems that are simple to solve, or that don't exist, for a monolithic service, such as service discovery, security, load balancing, monitoring, and rate limiting, are challenging for a microservice-based framework. Reliable, scalable, and high-performing solutions for common system-level issues are essential for microservice success, and there is a Grab-wide initiative to provide those common solutions.

As an important component of the initiative, we wrote a microservice called Quotas, a highly scalable API request rate limiting solution to mitigate the problems of service abuse and cascading service failures. In this article, we discuss the challenges Quotas addresses, how we designed it, and the end results. 

What Quotas tries to address

Rate-limiting is a well-known concept, used by many companies for years. For example, telecommunication companies and content providers frequently throttle requests from abusive users by using popular rate-limiting algorithms such as leaky bucket, fixed window, sliding log, sliding window, etc. All of these avoid resource abuse and protect important resources. Companies have also developed rate limiting solutions for inter-service communications, such as Doorman (https://github.com/youtube/doorman/blob/master/doc/design.md), Ambassador (https://www.getambassador.io/reference/services/rate-limit-service), etc., just to name a few.

Rate limiting can be enforced locally or globally. Local rate limiting means an instance accumulates API request information and makes decisions locally, with no coordination required. For example, a local rate limiting strategy can specify that each service instance can serve up to 1000 requests per second for an API, and the service instance will keep a local time-aware request counter. Once the number of received requests exceeds the threshold, it will reject new requests immediately until the next time bucket with available quota. Global rate limiting means multiple instances share the same enforcement policy. With global rate limiting, regardless of the service instance a client calls, it will be subjected to the same global API quota. Global rate limiting ensures there is a global view and it is preferred in many scenarios. In a cloud context, with auto scaling policy setup, the number of instances for a service can increase significantly during peak traffic hours. If only local rate limiting is enforced, the accumulative effect can still put great pressure on critical resources such as databases, network, or downstream services and the cumulative effects can cause service failures.
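
As a concrete illustration of the local strategy described above, a single-instance fixed-window limiter can be sketched in a few lines of Java (simplified and illustrative only; this is not Quotas code):

// Simplified fixed-window limiter: at most `limit` requests per one-second
// window, counted locally on a single instance.
public class LocalFixedWindowLimiter {
    private final int limit;
    private long windowStartMillis = System.currentTimeMillis();
    private int count = 0;

    public LocalFixedWindowLimiter(int limit) {
        this.limit = limit;
    }

    public synchronized boolean allow() {
        long now = System.currentTimeMillis();
        if (now - windowStartMillis >= 1000) {
            windowStartMillis = now; // start a new one-second window
            count = 0;
        }
        count++;
        return count <= limit;
    }

    public static void main(String[] args) {
        LocalFixedWindowLimiter limiter = new LocalFixedWindowLimiter(2);
        for (int i = 0; i < 4; i++) {
            System.out.println("request " + i + " allowed = " + limiter.allow());
        }
        // The first two requests in the window are allowed; the rest are rejected
        // until the next one-second window begins.
    }
}

A global limiter has to answer the same allow/deny question, but with a counter that is shared across every instance of the service, which is where a centralized store or an asynchronous aggregation pipeline comes in.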

However, supporting global rate limiting in a distributed environment is not easy, and it becomes even more challenging as the number of services and instances increases. To maintain a global view, Quotas needs to know how many requests a client service A (i.e., service A is a client of Quotas) is currently receiving on an endpoint, compared to the defined thresholds. If the number of requests already exceeds the thresholds, the Quotas service should help block a new request before service A executes its main logic. By doing that, Quotas helps service A protect resources such as CPU, memory, the database, the network, and its downstream services. To track global request counts on service endpoints, a centralized data store such as Redis or Dynamo is generally used for aggregation and decision making. In addition, decision latency and scalability become major concerns if every request needs to call the rate limiting service (i.e., Quotas) to decide whether the request should be throttled; in that case, the rate limiting service would sit on the critical path of every request, which is a major concern for client services. That is the scenario we absolutely wanted to avoid when designing Quotas.

Designing Quotas

Quotas helps Grab's internal services guarantee their service level agreements (SLAs) by throttling "excessive" API requests made to them, thereby avoiding cascading failures. By rejecting these calls early through throttling, services can be protected from depleting critical resources such as databases, computation resources, etc.

The two main goals for Quotas are:

  • Help client services throttle excessive API requests in a timely fashion.

  • Minimize latency impacts on client services, i.e., client services should only see negligible latency increase on API response time.

We followed these design guidelines:

  1. Provide a thin client implementation. The Quotas service should keep most of the processing logic on its own side. Once a client SDK is released, it’s very hard to track who’s using which version and to update every client service with a new SDK version. Also, more complex client-side logic increases the chances of introducing bugs.

  2. Allow the Quotas service to scale by using an asynchronous processing pipeline instead of a synchronous one (i.e., one in which the client service calls Quotas for every API request). By processing events asynchronously, a client service can immediately decide whether to throttle an API request when it comes in, without delaying the response too much.

  3. Allow horizontal scaling through config changes. This is very important since the goal is to onboard all Grab internal services.

Figure 1 is a high-level system diagram for Quotas’ client and server side interactions. Kafka sits at the core of the system design. Kafka is an open-source distributed streaming platform under the Apache license and it’s widely adopted by the industry (https://kafka.apache.org/intro). Kafka is used in Quotas system design for the following purposes:

  1. Quotas client services (i.e., services B and C in Figure 1) send API usage information through a dedicated Kafka topic and Quotas service consumes the events and performs its business logic.

  2. Quotas service sends rate-limiting decisions through application-specific Kafka topics and the Quotas client SDKs running on the client service instances consume the rate-limiting events and update the local in-memory cache for rate-limiting decisions. For example, Quotas service uses topic names such as “rate-limiting-service-b” for rate-limiting decisions with service B and “rate-limiting-service-c” for service C.

  3. An archiver is running with Kafka to archive the events to AWS S3 buckets for additional analysis.

Figure 1: Quotas High-level System Design

Figure 2 shows the details of the Quotas client-side logic, using service B as an example. When a request comes in (e.g., from service A), service B performs the following logic:

  1. Quotas middleware running with service B
    1. intercepts the request and calls Quotas client SDK for the rate limiting decision based on API and client information.
      1. If it throttles the request, service B returns a response code indicating the request is throttled.
      2. If it doesn't throttle the request, service B handles it with its normal business logic.
    2. asynchronously sends the API request information to a Kafka topic for processing.
  2. Quotas client SDK running with service B
    1. consumes the application-specific rate-limiting Kafka stream and updates its local in-memory cache for new rate-limiting decisions. For example, if the previous decision is true (i.e., enforcing rate limiting), and the new decision from the Kafka stream is false, the local in-memory cache will be updated to reflect the change. After that, if a new request comes in from service A, it will be allowed to go through and served by service B.
    2. provides a single public API to read the rate limiting decision based on API and client information. This public API reads the decisions from its local in-memory cache.
Figure 2: Quotas Client Side Logic
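
A minimal sketch of what the client-side read path described above could look like in Go. The DecisionCache type, the key format, and the ShouldThrottle method are assumed names used only for illustration; they are not the actual Quotas SDK API.

package main

import (
	"fmt"
	"sync"
)

// DecisionCache holds the latest rate-limiting decisions consumed from the
// application-specific Kafka topic, keyed by "<client>:<api>".
type DecisionCache struct {
	mu        sync.RWMutex
	decisions map[string]bool // true means "throttle"
}

func NewDecisionCache() *DecisionCache {
	return &DecisionCache{decisions: make(map[string]bool)}
}

// Update is called by the Kafka consumer whenever a new decision event arrives.
func (c *DecisionCache) Update(client, api string, throttle bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.decisions[client+":"+api] = throttle
}

// ShouldThrottle is the single public read API used by the middleware.
// Unknown (client, api) pairs default to "allow".
func (c *DecisionCache) ShouldThrottle(client, api string) bool {
	c.mu.RLock()
	defer c.mu.RUnlock()
	return c.decisions[client+":"+api]
}

func main() {
	cache := NewDecisionCache()
	cache.Update("service-a", "GET /bookings", true)             // decision consumed from Kafka
	fmt.Println(cache.ShouldThrottle("service-a", "GET /bookings")) // true -> reject early
}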

Figure 3 shows the details of Quotas server side logic. It performs the following business logic:

  • Consumes the Kafka stream topic for API request information

  • Performs aggregations on the API usages

  • Stores the stats in a Redis cluster periodically

  • Makes a rate-limiting decision periodically

  • Sends the rate-limiting decisions to an application-specific Kafka stream

  • Sends the stats to DataDog for monitoring and alerting periodically

In addition, an admin UI is available for service owners to update thresholds and the changes are picked up immediately for the upcoming rate-limiting decisions.

Figure 3: Quotas Server Side Logic
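
As a simplified sketch of the periodic decision-making step, the server could compare aggregated usage against the configured thresholds and emit one decision per (client, API) pair, as in the following Go example. The threshold and decision types and the evaluate function are assumed names for illustration; the real Quotas logic also covers the 1-second and 5-second windows described later.

package main

import "fmt"

// threshold is a per-client, per-API limit configured through the admin UI.
type threshold struct {
	Client, API string
	PerSecond   int64
}

// decision is the event published to the application-specific Kafka topic.
type decision struct {
	Client, API string
	Throttle    bool
}

// evaluate compares aggregated usage (e.g. read back from Redis) against the
// configured thresholds and produces one decision per (client, API) pair.
func evaluate(usage map[string]int64, thresholds []threshold) []decision {
	out := make([]decision, 0, len(thresholds))
	for _, t := range thresholds {
		key := t.Client + ":" + t.API
		out = append(out, decision{
			Client:   t.Client,
			API:      t.API,
			Throttle: usage[key] > t.PerSecond,
		})
	}
	return out
}

func main() {
	usage := map[string]int64{"service-a:GET /bookings": 1200}
	limits := []threshold{{Client: "service-a", API: "GET /bookings", PerSecond: 1000}}
	for _, d := range evaluate(usage, limits) {
		fmt.Printf("%s %s throttle=%v\n", d.Client, d.API, d.Throttle) // would be sent to Kafka
	}
}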

Implementation decisions and optimizations

On the client service side (service B in the above diagrams), the Quotas client SDK is initialized when a service B instance is initialized. The Quotas client SDK is a wrapper that consumes Kafka rate-limiting events and writes/reads the in-memory cache. It exposes a single API to check the rate-limiting decision for a given client and API method. Also, service B is hooked up with the Quotas middleware to intercept API requests. Internally, the middleware calls the Quotas client SDK API to determine whether to allow or reject a request before the actual business logic runs. Currently, the Quotas middleware supports both gRPC and REST protocols.

Quotas utilizes a company-wide streaming solution called Sprinkler for the Kafka stream producer and consumer implementations. It offers streaming SDKs built on top of sarama (an MIT-licensed Go library for Apache Kafka), providing asynchronous event sending/consuming, retry, and circuit-breaking capabilities.

Quotas provides throttling capabilities based on the sliding window algorithm at the 1-second and 5-second levels. To support extremely high TPS demands, most of Quotas’ intermediate operations are designed to be done asynchronously. Internal benchmarks show the delay for enforcing a rate-limiting decision is up to 200 milliseconds. By combining 1-second and 5-second level settings, client services can throttle requests more effectively.

During system implementation, we found that if a Quotas instance made a call to the Redis cluster every time it received an event from the Kafka API usage stream, the Redis cluster would quickly become a bottleneck due to the amount of calculations. By aggregating API usage stats locally in memory and calling the Redis instances periodically (i.e., every 50 ms), we can significantly reduce Redis usage and still keep the overall decision latency at a relatively low level. In addition, we designed the hash keys in a way that makes sure requests are evenly distributed across Redis instances.
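
The local aggregation described above could be sketched roughly as follows in Go. The counterStore interface, the key format, and the 50 ms flush loop are illustrative assumptions; in production the store would be a Redis client issuing batched increments rather than the stand-in used here.

package main

import (
	"fmt"
	"sync"
	"time"
)

// counterStore abstracts the Redis-like backend; in production this would be
// a Redis client issuing increments against sharded keys.
type counterStore interface {
	IncrBy(key string, delta int64)
}

// stdoutStore is a stand-in backend used only for this example.
type stdoutStore struct{}

func (stdoutStore) IncrBy(key string, delta int64) {
	fmt.Printf("INCRBY %s %d\n", key, delta)
}

// aggregator buffers API usage counts in memory and flushes them periodically,
// so the backend is called once per interval instead of once per event.
type aggregator struct {
	mu     sync.Mutex
	counts map[string]int64
	store  counterStore
}

func newAggregator(store counterStore) *aggregator {
	return &aggregator{counts: make(map[string]int64), store: store}
}

// Record is called for every event consumed from the API usage Kafka topic.
func (a *aggregator) Record(key string) {
	a.mu.Lock()
	a.counts[key]++
	a.mu.Unlock()
}

// flushLoop pushes the buffered counts to the store every interval (e.g. 50 ms).
func (a *aggregator) flushLoop(interval time.Duration, stop <-chan struct{}) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			a.mu.Lock()
			pending := a.counts
			a.counts = make(map[string]int64)
			a.mu.Unlock()
			for key, delta := range pending {
				a.store.IncrBy(key, delta)
			}
		case <-stop:
			return
		}
	}
}

func main() {
	agg := newAggregator(stdoutStore{})
	stop := make(chan struct{})
	go agg.flushLoop(50*time.Millisecond, stop)
	for i := 0; i < 100; i++ {
		agg.Record("service-b:GET /bookings:1s")
	}
	time.Sleep(120 * time.Millisecond) // let at least one flush happen
	close(stop)
}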

Evaluation and benchmarks

We did multiple rounds of load tests, both before and after launching Quotas, to evaluate its performance and find potential scaling bottlenecks. After the optimization efforts, Quotas now gracefully handles 200k peak production TPS. More importantly, critical system resource usage for Quotas’ application server, Redis and Kafka are still at a relatively low level, suggesting that Quotas can support much higher TPS before the need to scale up.

Quotas current production settings are:

  1. 12 c5.2xlarge (8 vCPU, 16GB) AWS EC2 instances

  2. 6 cache.m4.large (2 vCPU, 6.42GB, master-slave) AWS ElasticCaches

  3. Shared Kafka cluster with other application topics

Figures 4 & 5 show a typical day’s CPU usage for the Quotas application server and Redis Cache respectively. With 200k peak TPS, Quotas handles the load with peak application server CPU usage at about 20% and Redis CPU usage of 15%. Due to the nature of Quotas data usage, most of the data stored in Redis cache is time sensitive and stored with time-to-live (TTL) values.

However, because of how Redis expires keys (https://redis.io/commands/expire) and the amount of time-sensitive data Quotas stores in Redis, we have implemented a proprietary cron job to actively garbage collect expired Redis keys. By running the cron job every 15 minutes, Quotas keeps the Redis memory usage at a low level.

Figure 4: Quotas CPU Usage
Figure 5: Quotas Redis CPU Usage

We have conducted load tests to identify the potential issues for scaling Quotas. The tests have shown that we can horizontally scale Quotas to support extremely high TPS using only configuration changes:

  1. Kafka is well known for its high throughput, low-latency, high scalability characteristics. By either increasing the number of partitions on Quotas API usage topic or adding more Kafka nodes, the system can evenly distribute and handle additional load.

  2. All Quotas application servers form a consumer group (CG) to consume the Kafka API usage topic (partitioned based on the expected number of instances). Whenever an instance starts or goes offline, the topic partitions are re-distributed among the application servers. This keeps topic partition consumption balanced and thus application server CPU and memory usage fairly evenly distributed.

  3. We have also implemented a consistent-hashing-based algorithm to support multiple Redis instances. It allows Redis instances to be added or removed through configuration changes alone. With well-chosen hash keys, load can be evenly distributed to the Redis instances, as illustrated in the sketch below.
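
Point 3 could look roughly like the following generic consistent hashing ring in Go. The CRC32 hash, the number of virtual nodes, and the instance names are assumptions made for the sketch and are not Grab's actual algorithm.

package main

import (
	"fmt"
	"hash/crc32"
	"sort"
	"strconv"
)

// ring maps hashed keys onto Redis instances using consistent hashing with
// virtual nodes, so adding or removing an instance only remaps a small share
// of the key space.
type ring struct {
	hashes []uint32
	nodes  map[uint32]string
}

func newRing(instances []string, vnodes int) *ring {
	r := &ring{nodes: make(map[uint32]string)}
	for _, inst := range instances {
		for v := 0; v < vnodes; v++ {
			h := crc32.ChecksumIEEE([]byte(inst + "#" + strconv.Itoa(v)))
			r.hashes = append(r.hashes, h)
			r.nodes[h] = inst
		}
	}
	sort.Slice(r.hashes, func(i, j int) bool { return r.hashes[i] < r.hashes[j] })
	return r
}

// Pick returns the instance responsible for the given key.
func (r *ring) Pick(key string) string {
	h := crc32.ChecksumIEEE([]byte(key))
	i := sort.Search(len(r.hashes), func(i int) bool { return r.hashes[i] >= h })
	if i == len(r.hashes) {
		i = 0 // wrap around the ring
	}
	return r.nodes[r.hashes[i]]
}

func main() {
	r := newRing([]string{"redis-1", "redis-2", "redis-3"}, 100)
	fmt.Println(r.Pick("service-b:GET /bookings:1s"))
}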

With the above design and implementations, all the critical Quotas components can be easily scaled and extended when a bottleneck occurs either at Kafka, application server, or Redis levels.

Roadmap for Quotas

Quotas is currently used by more than a dozen internal Grab services, and soon all Grab internal services will use it.

Quotas is part of the company-wide ServiceMesh effort to handle service discovery, load balancing, circuit breaker, retry, health monitoring, rate-limiting, security, etc. consistently across all Grab services.


          A Distributed Classifier for MicroRNA Target Prediction with Validation Through TCGA Expression Data
Background: MicroRNAs (miRNAs) are approximately 22-nucleotide long regulatory RNA that mediate RNA interference by binding to cognate mRNA target regions. Here, we present a distributed kernel SVM-based binary classification scheme to predict miRNA targets. It captures the spatial profile of miRNA-mRNA interactions via smooth B-spline curves. This is accomplished separately for various input features, such as thermodynamic and sequence-based features. Further, we use a principled approach to uniformly model both canonical and non-canonical seed matches, using a novel seed enrichment metric. Finally, we verify our miRNA-mRNA pairings using an Elastic Net-based regression model on TCGA expression data for four cancer types to estimate the miRNAs that together regulate any given mRNA. Results: We present a suite of algorithms for miRNA target prediction, under the banner Avishkar, with superior prediction performance over the competition. Specifically, our final kernel SVM model, with an Apache Spark backend, achieves an average true positive rate (TPR) of more than 75 percent, while keeping the false positive rate at 20 percent, for non-canonical human miRNA target sites. This is an improvement of over 150 percent in the TPR for non-canonical sites, over the best-in-class algorithm. We are able to achieve such superior performance by representing the thermodynamic and sequence profiles of miRNA-mRNA interaction as curves, devising a novel seed enrichment metric, and learning an ensemble of miRNA family-specific kernel SVM classifiers. We provide an easy-to-use system for large-scale interactive analysis and prediction of miRNA targets. All operations in our system, namely candidate set generation, feature generation and transformation, training, prediction, and computing performance metrics, are fully distributed and scalable. Conclusions: We have developed an efficient SVM-based model for miRNA target prediction using recent CLIP-seq data, demonstrating superior performance, evaluated using ROC curves for different species (human or mouse), or different target types (canonical or non-canonical). We analyzed the agreement between the target pairings using CLIP-seq data and using expression data from four cancer types. To the best of our knowledge, we provide the first distributed framework for miRNA target prediction based on Apache Hadoop and Spark. Availability: All source code and sample data are publicly available at https://bitbucket.org/cellsandmachines/avishkar. Our scalable implementation of kernel SVM using Apache Spark, which can be used to solve large-scale non-linear binary classification problems, is available at https://bitbucket.org/cellsandmachines/kernelsvmspark.
          (USA-OR-Beaverton) Senior Application Engineer
Become a Part of the NIKE, Inc. Team NIKE, Inc. does more than outfit the world's best athletes. It is a place to explore potential, obliterate boundaries and push out the edges of what can be. The company looks for people who can grow, think, dream and create. Its culture thrives by embracing diversity and rewarding imagination. The brand seeks achievers, leaders and visionaries. At Nike, it’s about each person bringing skills and passion to a challenging and constantly evolving game. Nike Technology designs, creates and implements the methods and tools needed to make the world’s largest sports brand run faster, smarter and more securely. Global Technology teams aggressively innovate the solutions needed to help employees navigate Nike's rapidly evolving landscape. From infrastructure to security and supply chain operations, Technology specialists drive growth through top-flight hardware, software and enterprise applications. Simply put, without Nike Technology, there are no Nike products. **Description** Nike Tech brings together technology and process expertise to create value for the consumer. We deliver one-stop, integrated process and technology capabilities that enable Nike, Inc.'s businesses and brands worldwide. Our focus is on providing Lean solutions that eliminate waste, maximize consumer value, and drive profitable business growth. As a Senior Software Engineer, you will: • Work within an agile team to develop new software solutions for the North America business • Work on enhancing and refactoring existing applications in various legacy technology stacks • Provide support for a catalog of existing/legacy applications already in use. • Promote, practice, and cultivate Dev Ops principles, including containerization, testing automation, Continuous Delivery (CD), and Infrastructure as Code (IaC) • Ensure new solutions are designed and developed using scalable, modular, highly resilient cloud architectures • Ensure product and technical features are delivered to spec and on-time • Develop tools and frameworks to improve security, reliability, maintainability, availability and performance for the technology foundation of our platform • Be a key contributor to overall solutions architecture **Qualifications** To make it clear, we're not looking for just anyone. 
We're looking for someone special, someone who had these experiences and clearly demonstrated these skills: • Masters or Bachelor’s degree in Computer Science or a related field • Exceptional collaboration, listening and verbal communication skills to effectively communicate with business and technical IT communities • Skill in mapping or understanding business processes including user story development, estimating, and data modeling • Our ideal candidate has strong capability in the following areas o Solutions Development (web/services/mobile/frameworks) o Production Support (triage/solve/deploy) o DevOps (build/test/containerize/deploy) o Database technologies (administration/development) o Cloud Technologies (AWS/Azure) o Server Administration (Windows/Linux) • 7+ years of experience in developing robust, highly scalable, web-based enterprise solutions, services, and frameworks • 3+ years of experience developing solutions with Microsoft .NET framework, including ASP.NET forms, ASP.NET MVC, VB.NET • 2+ years experience supporting, debugging, and refactoring existing applications (web focus), including classic ASP, VBScript, Python, and Node.js • 2+ years developing solutions with Java • 1+ years REACT.js or Angular • 2+ years administering Windows servers, including troubleshooting/ configuring IIS, user administration, etc. • 1+ years experience with Linux administration, including Apache administration, package installations, etc. • 2+ years DevOps experience o Ability to build a Continuous Integration (CI) pipeline including project build, test automation, deployment, etc. o Experience with IaC using Terraform/Cloud Formation o Experience with PowerShell, including building scripts for automation of server configuration • 1+ years experience with Azure, including configuring servers, services via Portal • 2+ years of Hands-on experience with AWS, including EC2, S3, DynamoDB, Aurora, ElasticSearch, Snowflake, RDS, SQS, SNS and Lambda (Node.js) • 2+ years experience with MS SQL Server o Administration (user administration, indexing, jobs, etc.) o Schema design o Comprehend various SQL schemas, write/debug Stored Procedures, and perform basic SQL administration • 1+ year building reports with SQL Server Reporting Services (SSRS) and Tableau Additionally - • Strong Experience with both relational and No-SQL databases • Experience with Docker, Kubernetes or other container technologies • Experience with participating in projects in a highly collaborative, multi-discipline development team environment • Exposure to Agile and test-driven development, ideally knowledge of the SAFe methodology • Exposure to hierarchical and distributed code repository management tools like GIT • Experience with mobile development (Objective-C, Kotlin, Swift, etc.) • User Interface (UI) or User Experience (UX) design Experience in using the following tools • Visual Studio (2008+) • Unit testing framework (NUnit, etc.) • Mocking engine (NMock, JustMock, etc.) • SQL Server Management Studio (SSMS) • TeamCity • Docker • Octopus • Terraform • VersionOne • Git • Telerik toolset (including Kendo) • IntelliJ • Postman • SoapUI • Redgate toolset • AWS Management Console • Azure Portal NIKE, Inc. is a growth company that looks for team members to grow with it. Nike offers a generous total rewards package, casual work environment, a diverse and inclusive culture, and an electric atmosphere for professional development. 
No matter the location, or the role, every Nike employee shares one galvanizing mission: To bring inspiration and innovation to every athlete* in the world. NIKE, Inc. is committed to employing a diverse workforce. Qualified applicants will receive consideration without regard to race, color, religion, sex, national origin, age, sexual orientation, gender identity, gender expression, veteran status, or disability. **Job ID:** 00388412 **Location:** United States-Oregon-Beaverton **Job Category:** Technology
          84.81%+ CAGR of Hadoop-as-a-Service Market By Top Companies AWS, IBM, Microsoft, EMC, Google, HP Report Forecast 2023
Hadoop-as-a-Service is a provisioning model offered to organizations seeking to incorporate a hosted implementation of the Hadoop platform. Apache Hadoop is an open-source software platform that uses the MapReduce technology to perform distributed computations on various hardware servers. Hadoop-as-a-service (HDaaS)

          Hadoop Developer with Java - Allyis Inc. - Seattle, WA
Working knowledge of big data technologies such as Apache Flink, Nifi, Spark, Presto, Elastic Search, DynamoDB and other relational data stores....
From Dice - Sat, 28 Jul 2018 03:49:51 GMT - View all Seattle, WA jobs
          Sr Software Engineer - Hadoop / Spark Big Data - Uber - Seattle, WA
Under the hood experience with open source big data analytics projects such as Apache Hadoop (HDFS and YARN), Spark, Hive, Parquet, Knox, Sentry, Presto is a...
From Uber - Sun, 13 May 2018 06:08:42 GMT - View all Seattle, WA jobs
          Software Development Engineer - Big Data Platform - Amazon.com - Seattle, WA
Experience with Big Data technology like Apache Hadoop, NoSQL, Presto, etc. Amazon Web Services is seeking an outstanding Software Development Engineer to join...
From Amazon.com - Wed, 08 Aug 2018 19:26:05 GMT - View all Seattle, WA jobs
          Sr. Technical Account Manager - Amazon.com - Seattle, WA
You can also run other popular distributed frameworks such as Apache Spark, Apache Flink, and Presto in Amazon EMR;...
From Amazon.com - Wed, 01 Aug 2018 01:21:56 GMT - View all Seattle, WA jobs
          Warning Conflict:

I am getting this warning code from the dashboard of WordPress:

Warning: is_readable(): open_basedir restriction in effect. File(/nfs/c11/h07/mnt/206500/domains/5mrealty.com/html/5MRealty/wp-content/plugins/testimonial-free/testimonial-free.php/languages/testimonial-free-en_US.mo) is not within the allowed path(s): (/nfs:/tmp:/usr/local:/etc/apache2/gs-bin) in /nfs/c11/h07/mnt/206500/domains/5mrealty.com/html/5MRealty/wp-includes/l10n.php on line 584

Warning: Cannot modify header information – headers already sent by (output started at /nfs/c11/h07/mnt/206500/domains/5mrealty.com/html/5MRealty/wp-includes/l10n.php:584) in /nfs/c11/h07/mnt/206500/domains/5mrealty.com/html/5MRealty/wp-admin/includes/misc.php on line 1126

I tried troubleshooting by turning off all plugins and by using the default Twenty Sixteen theme, but I continued to receive the same warning.


          Reply To: wp spamshield

				### WordPress ###

Version: 4.9.8
Language: en_US
Permalink structure: /%postname%/
Is this site using HTTPS?: Yes
Can anyone register on this site?: No
Default comment status: open
Is this a multisite?: No
User Count: 1
Communication with WordPress.org: WordPress.org is reachable
Create loopback requests: The loopback request to your site failed, this may prevent WP_Cron from working, along with theme and plugin editors.<br>Error encountered: (0) cURL error 60: SSL certificate problem: self signed certificate

### Installation size ###

Uploads Directory: 45.95 MB
Themes Directory: 12.56 MB
Plugins Directory: 72.61 MB
Database size: 2.30 MB
Whole WordPress Directory: 0.00 B
Total installation size: 2.30 MB- Some errors, likely caused by invalid permissions, were encountered when determining the size of your installation. This means the values represented may be inaccurate.

### Active Theme ###

Name: OceanWP
Version: 1.5.23
Author: Nick
Author website: https://oceanwp.org/about-me/
Parent theme: Not a child theme
Supported theme features: post-thumbnails, menus, gutenberg, post-formats, title-tag, automatic-feed-links, custom-header, custom-logo, html5, woocommerce, wc-product-gallery-zoom, wc-product-gallery-lightbox, wc-product-gallery-slider, editor-style, customize-selective-refresh-widgets, widgets

### Other themes (3) ###

Twenty Fifteen (twentyfifteen): Version 2.0 by the WordPress team
Twenty Seventeen (twentyseventeen): Version 1.7 by the WordPress team
Twenty Sixteen (twentysixteen): Version 1.5 by the WordPress team

### Must Use Plugins (1) ###

Health Check Troubleshooting Mode: Version 1.5.0

### Active Plugins (24) ###

Advanced noCaptcha & invisible Captcha: Version 2.7 by Shamim
Better Search Replace: Version 1.3.2 by Delicious Brains
Cloudflare: Version 3.3.2 by John Wineman, Furkan Yilmaz, Junade Ali (Cloudflare Team)
Contact Form 7: Version 5.0.3 by Takayuki Miyoshi
Custom Sidebars: Version 3.1.6 by WPMU DEV
Elementor: Version 2.1.6 by Elementor.com
Elementor Addons & Templates - Sizzify Lite: Version 1.2.5 by ThemeIsle
Essential Addons for Elementor: Version 2.7.5 by Codetic
Health Check & Troubleshooting: Version 1.2.1 by The WordPress.org community
Ocean Custom Sidebar: Version 1.0.4 by OceanWP
Ocean Extra: Version 1.4.20 by OceanWP
Ocean Social Sharing: Version 1.0.13 by OceanWP
Ocean Stick Anything: Version 1.0.2 by OceanWP
Passwordless Login: Version 1.0.7 by Cozmoslabs, sareiodata
Premium Addons for Elementor: Version 2.5.4 by Leap13 ( Latest version: 2.5.5 )
Printful Integration for WooCommerce: Version 2.0.4 by Printful
Profile Builder: Version 2.8.7 by Cozmoslabs
Really Simple SSL: Version 3.0.5 by Rogier Lankhorst, Mark Wolters
SSL Insecure Content Fixer: Version 2.7.0 by WebAware
WooCommerce: Version 3.4.4 by Automattic
WooCommerce Stripe Gateway: Version 4.1.8 by WooCommerce
WooCommerce Variation Swatches: Version 1.0.34 by Emran Ahmed
WooCommerce Wishlist Plugin: Version 1.8.9 by TemplateInvaders
Yoast SEO: Version 7.9.1 by Team Yoast

### Media handling ###

Active editor: WP_Image_Editor_GD
Imagick Module Version: Imagick not available
ImageMagick Version: Imagick not available
GD Version: bundled (2.1.0 compatible)
Ghostscript Version: Unable to determine if Ghostscript is installed

### Server ###

Server architecture: Linux 2.6.32-896.16.1.lve1.4.54.el6.x86_64 x86_64
PHP Version: 7.0.19 (Supports 64bit values)
PHP SAPI: apache2handler
PHP max input variables: 8000
PHP time limit: 20
PHP memory limit: 158M
Max input time: 30
Upload max filesize: 10M
PHP post max size: 20M
cURL Version: 7.48.0 OpenSSL/1.0.1e
SUHOSIN installed: No
Is the Imagick library available: No
htaccess rules: Your htaccess file only contains core WordPress features

### Database ###

Extension: mysqli
Server version: 5.6.35-81.0
Client version: mysqlnd 5.0.12-dev - 20150407 - $Id: b5c5906d452ec590732a93b051f3827e02749b83 $
Database prefix: wpej_

### WordPress Constants ###

ABSPATH: /home/vol13_5/epizy.com/epiz_22535368/htdocs/
WP_HOME: Undefined
WP_SITEURL: Undefined
WP_DEBUG: Disabled
WP_MAX_MEMORY_LIMIT: 158M
WP_DEBUG_DISPLAY: Enabled
WP_DEBUG_LOG: Disabled
SCRIPT_DEBUG: Disabled
WP_CACHE: Disabled
CONCATENATE_SCRIPTS: Undefined
COMPRESS_SCRIPTS: Undefined
COMPRESS_CSS: Undefined
WP_LOCAL_DEV: Undefined

### Filesystem Permissions ###

The main WordPress directory: Writable
The wp-content directory: Writable
The uploads directory: Writable
The plugins directory: Writable
The themes directory: Writable
The Must Use Plugins directory: Writable


          PyCharm 2018.2.1 - Python IDE with complete set of tools. (Shareware)

PyCharm is a Python IDE with complete set of tools for productive development with the Python programming language. In addition, the IDE provides high-class capabilities for professional Web development with the Django framework.

Following the release of version 3, PyCharm forked into two paths: a free, Open-Source Community Edition; and the commercial, full-featured Professional Edition. Here are a few highlights of the different forks:

Professional Edition

  • Full-featured IDE for Python & Web development
  • Supports Django, Flask, Google App Engine, Pyramid, web2py
  • JavaScript, CoffeeScript, TypeScript, CSS, Cython, Template languages and more
  • Remote development, Databases and SQL support, UML, and SQLAlchemy Diagrams
Community Edition
  • Lightweight IDE for Python development only
  • Free, open-source, Apache 2 license
  • Intelligent Editor, Debugger, Refactorings, Inspections, VCS integration
  • Project Navigation, Testing support, Customizable UI, Vim key bindings
You can compare the forks here.

The quoted price is for an individual customer, paid annually; PyCharm is available at several price points. See this page for more information.



Version 2018.2.1:
  • Release notes were unavailable when this listing was updated.


  • OS X 10.8 or later
  • Python 2.4 or later, Jython, PyPy, or IronPython



More information

Download Now
          Install Issue v2.0.0

Synology Diskstation with PHP 7.0 and Apache 2.4.

I had Fusioninvoice installed before (2018-08), that's it.

The error is everywhere, not just local.


          Programmer - KORPORATE TECHNOLOGIES - Madrid, Spain
Requirements: Technical: More than 1 year of programming experience in .net, C# ASP .NET HTML5 CSS 3 Visual Studio SQL IT knowledge (systems, networks) Database knowledge: MSQL MySQL Knowledge of the following is valued: JSON Jquery Linux Apache PHP Experience with the following software is valued: PaperCut PlanetPress Technical experience with document management and printing solutions. Functional requirements: A structured working methodology and...
          Junior Programmer - KORPORATE TECHNOLOGIES - Madrid, Spain
We are looking for a Junior Programmer. Requirements: Technical requirements: More than 1 year of programming experience in .net, C# ASP .NET HTML5 CSS 3 Visual Studio SQL IT knowledge (systems, networks) Database knowledge: MSQL MySQL Knowledge of the following is valued: JSON Jquery Linux Apache PHP Experience with the following software is valued: PaperCut PlanetPress Technical experience with document management and printing solutions. Requirements...
          VARIOUS - Dirty Beats/Underground Drum & Bass For Experts (Breakdrum Recordings)
Title: Dirty Beats/Underground Drum & Bass For Experts
Artist: VARIOUS
Label: Breakdrum Recordings
Format: 192kb/s mp3, 320kb/s mp3, wav

Track listing:
MP3 Sample - Insidious
MP3 Sample - Phill Is Walking Outside And He's Thinking About Her Lips
MP3 Sample - Tomorrow (Manta remix)
MP3 Sample - Integral (Pwp remix)
MP3 Sample - Jungle Spargle (radio version)
MP3 Sample - Dialyse
MP3 Sample - Craving (Liquid Hands remix)
MP3 Sample - Fkn Happy (Manta remix)
MP3 Sample - Livin (feat Micah Brynes)
MP3 Sample - Not Turning Around
MP3 Sample - Stop Our World (Rebekka Tsukava remix)
MP3 Sample - Common Anxieties (Silence Groove remix)
MP3 Sample - Barrel Bayou
MP3 Sample - R+100010
MP3 Sample - Radiate
MP3 Sample - Into The Unknown
MP3 Sample - Trucelent
MP3 Sample - Spread Out And Scatter
MP3 Sample - Amnesia (feat MC Dart - Mindset remix)
MP3 Sample - In This Moment (Muffler remix)
MP3 Sample - Surprise, Motherfucker
MP3 Sample - Rock It (Liquid Funk mix)
MP3 Sample - Slow Motion
MP3 Sample - Big Up
MP3 Sample - The Drill Bra
MP3 Sample - I Don't Care
MP3 Sample - Engineers (Drum And Nasa)
MP3 Sample - Robots Rules The Cables
MP3 Sample - Light-Emitting Diode
MP3 Sample - Sometimes
MP3 Sample - Edge
MP3 Sample - The Drum & Bass Train
MP3 Sample - Hoe
MP3 Sample - U & I
MP3 Sample - Pravda's Kiss
MP3 Sample - Elevate
MP3 Sample - Replicant
MP3 Sample - Surges Of Emotion
MP3 Sample - Bounce
MP3 Sample - Raw Business
MP3 Sample - My Deepest Fear
MP3 Sample - Think Shuffle
MP3 Sample - How Far
MP3 Sample - She Calls Late
MP3 Sample - Nazgul In Aqua
MP3 Sample - In Your Arms (extended mix)
MP3 Sample - The Messenger
MP3 Sample - I Love Underground (feat MAL6N - DnB mix)
MP3 Sample - Game Dem Play (Redda Fella mix)
MP3 Sample - Kick The Back
MP3 Sample - MIlestone
MP3 Sample - Hang N Bass
MP3 Sample - Zion
MP3 Sample - When A Charr On Dope Tries To Fight
MP3 Sample - Dusty Road
MP3 Sample - Heartbeat
MP3 Sample - All Night Long (radio mix)
MP3 Sample - Dead End
MP3 Sample - Banged
MP3 Sample - Save The Moment
MP3 Sample - The Delfin Drums
MP3 Sample - Give Enough
MP3 Sample - Ungreatful (Funkanizer remix)
MP3 Sample - Rub A Dub Party (The Niceguys remix)
MP3 Sample - Quello Che Sono (Nicenine remix)
MP3 Sample - Terra Amara (instrumental version)
MP3 Sample - Treasure (Kanine remix)
MP3 Sample - Shakti Excess (Dorian remix)
MP3 Sample - Night Is Comming
MP3 Sample - Time Doesn't Wait (feat UK Apache - Red Handed Dnb remix)
MP3 Sample - Go To Hell
MP3 Sample - Diversity Hooligan
MP3 Sample - Eucalyptus
MP3 Sample - Drum Is Out Of Control
MP3 Sample - Daydream
MP3 Sample - Turn It Out
MP3 Sample - Rootkit
MP3 Sample - Koil
MP3 Sample - Stick' Um
MP3 Sample - Welcome

          Home-Based Satellite TV Technician/Installer - DISH Network - Apache, OK
Must possess a valid driver's license in the State you are seeking employment in, with a driving record that meets DISH's minimum safety standard.... $15 an hour
From DISH - Mon, 09 Jul 2018 19:17:49 GMT - View all Apache, OK jobs
          Personal Care Aide - May's Plus, Inc. - Apache, OK
Has a telephone and dependable transportation, valid driver’s license and liability insurance. Provides assistance with non-technical activities of daily living...
From May's Plus, Inc. - Tue, 17 Apr 2018 14:05:28 GMT - View all Apache, OK jobs
          The 157 new emojis coming with the next version of Android

Android 9 Pie is the new version of Google's mobile operating system. Although it is still in development and will reach devices over the coming months, some of the new features arriving with this update are already known. Highlights include battery life optimization, tools to control app usage, a timer that warns you if you have spent too much time in an app and, most anticipated of all, a set of more than 150 new emojis.

The icons, which add new people, animals, foods and sports, were approved by the Unicode Consortium, the non-profit organization in charge of standardizing a character system across computers and devices.

We can already imagine how conversations in apps such as WhatsApp will be enriched by emojis of people with afro, red, white or no hair, as well as different body parts such as feet, legs, teeth and bones. There will also be superheroes and supervillains.

Among the animals there will be lobsters, llamas, swans, hippos and the suggestive parrots and raccoons. Likewise, there will be more food dishes and desserts, plants and accessories (brooms, sewing thread, magnets and even toilet paper).

In the following video you can see all the new emojis that can be used before the end of the year. The mate emoji is not there yet; it has already been pre-approved for 2019, as can be seen on the Unicode site.







          How to Install PHP on Windows

We've previously shown you how to get a working local installation of Apache on your Windows PC. In this article, we'll show how to install PHP 5 as an Apache 2.2 module.

Why PHP?

PHP remains the most widespread and popular server-side programming language on the web. It is installed by most web hosts, has a simple learning curve, close ties with the MySQL database, and an excellent collection of libraries to cut your development time. PHP may not be perfect, but it should certainly be considered for your next web application. Both Yahoo and Facebook use it with great success.

Why Install PHP Locally?

Installing PHP on your development PC allows you to safely create and test a web application without affecting the data or systems on your live website. This article describes PHP installation as a module within the Windows version of Apache 2.2. Mac and Linux users will probably have it installed already.

All-in-One packages

There are some excellent all-in-one Windows distributions that contain Apache, PHP, MySQL and other applications in a single installation file, e.g. XAMPP (including a Mac version), WampServer and Web.Developer. There is nothing wrong with using these packages, although manually installing Apache and PHP will help you learn more about the system and its configuration options.

The PHP Installer

Although an installer is available from php.net, I would recommend the manual installation if you already have a web server configured and running.

The post How to Install PHP on Windows appeared first on SitePoint.


          Distributed Dialogues: Blockchain’s Better Side

The fact that great responsibility accompanies great power has become crystal clear in the blockchain world. While blockchains are most commonly connected with commerce, the potential impact of distributed ledgers is being discovered in fresh sectors daily.

In the most recent episode of the Distributed Dialogues podcast, a collaborative show between the Let’s Talk Bitcoin Network and Distributed Magazine, blockchain’s better side was on display. The show explored three different perspectives on how the technology is being used, not just to raise crypto value, but to help humanity rise up.

Blockchains for Human Rights

Alex Gladstein, chief strategy officer at the Human Rights Foundation (HRF), explained that organization’s optimism about blockchain technology. HRF is a nonpartisan, nonprofit organization that promotes and protects human rights globally, with a focus on closed societies.

According to Gladstein in his interview with the show’s co-host Rick Lewis, about 90 countries, with a total population of about 4 billion people, currently lack the checks and balances that a more open society would have.

Gladstein believes that decentralized models such as blockchains and cryptocurrencies can make a world of difference for this large population whose rights are routinely violated. It’s part of a nascent field he calls “demtech,” short for “democracy tech,” and its development comes with an unexpected bonus.

“Demtech would be getting power back in the hands of the people,” he said. “It’s not really out there yet … but it’s an opportunity, and what’s cool is you can probably make a lot of money in this space. When you talk about decentralized money networks, decentralized VPNs, censorship-resistant money and communications, I think there’s going to be huge demand for that …There’s tremendous opportunity to both impact the planet and make a lot of money, which is kind of a first for the human rights space.”

Brian Behlendorf on Governance

Brian Behlendorf is the executive director of Hyperledger, the umbrella project of open-source blockchains which is striving to support collaborative development for blockchain technology. As a primary developer of the Apache Web server, Behlendorf’s influence has spanned the web for decades.

His role as a founder of the Apache Software Foundation has also established him as a long-time advocate of the open-software community. Behlendorf strikes a balance between the responsibilities that should be designated to machine and to man, in his interview with Distributed Dialogues co-host Dave Hollerith.

“We can’t give up the need to find ways, as humans, to make decisions together,” Behlendorf pointed out. “And so, I think the more of governance, the more of business processes that we can make algorithmic and auditable using blockchain technology, in addition to lots of others, the better off we’ll be, because the more fair, potentially, we’ll have the application of those rules to society.

“But we still need human governance at the end of the day,” he continued, “and even the public blockchain ledgers have that in the form of the leaders of those projects, and the developers and the miners, who collectively make a decision, ‘Let’s bail out the DAO, but let’s not bail out the Parity Wallet hack victims.’ So these things happen, right? These human governance mechanisms happen. We can either embrace that and find ways to do that right or pretend that doesn’t exist and end up with Lord of the Flies.”

Flux

Blake Burris and Kylen McClintock of Flux, a new protocol for facilitating environmental data, spoke with segment host Tatiana Moroz. Flux is a self-described “proof of impact” play which dedicates 10 percent of its allocations to impact projects to scale the protocol.

According to the Flux website, it is deploying a sensor data network targeted at improving marketplaces and supply chains for agriculture, livestock and aquaculture. Its success, or proof of impact, will be measured by its ability to create partnerships that end desertification, stabilize crisis zones, integrate with micro-finance programs and help farmers increase their profitability.

Here, blockchains prove beneficial, courtesy of the Flux token (currently in pre-sale). “The token really comes in to incentivize data contribution,” McClintock explained.

“Currently there’s expert growers around the world, or organizations that have specific data in a certain realm like carbon data, methane data, satellite imagery data, but right now there’s not a global standard way to contribute to that and get rewarded for that contribution. [It’s] another way of actually creating a custom perception engine, basically a custom machine learning model to be able to take the relevant data capsules that an organization, or government or academic research needs to find those insights.”

“It’s really about those insights that can be derived from that mass data set, and paying on a pro rata basis back to those who contributed that data,” added Burris.

This article originally appeared on Bitcoin Magazine.


          Apache and Kayne Anderson Create $3.5B Pipeline Company in the Permian
Apache is contributing its midstream assets to form a new pipeline company with investment firm Kayne Anderson.
          Prometheus Milestone



          Linux/Windows Systems Administrator - Gfi Informática - Barcelona, Spain
We are looking to expand our internal IS team to develop banking projects at our Barcelona offices with Linux/Windows Systems Administrators/Technicians. Requirements: 5 years of prior experience in the service to be provided. Efficiency and proactivity under pressure. Experience administering Red Hat Enterprise systems (V6, V7). Experience administering Weblogic and Apache application servers. Advanced PowerShell scripting knowledge. Knowledge...
          Weblogic Systems Technician - Ibermática - Madrid, Spain
We are hiring a Systems Technician specialized in the maintenance and administration of web environments, with experience in Oracle Weblogic 10 and 12g, Tuxedo and Apache. The selected professional will carry out the following tasks: Support for the logical and physical design of the infrastructure. Code review to find incompatibilities with JVM 1.8 and propose improvements. Migration of the technology stack. Load testing / tuning of the Java platform. Template generation:...
          Linux Systems Technician - Major IT Consulting Firm - Madrid, Spain
A major IT company is hiring a Technician. Requirements: Experience with Linux, experience maintaining Apache, Tomcat and JBoss platforms, experience deploying applications on these platforms, and monitoring of Java applications. Workplace: Madrid.
          Web Security Analyst - Bell - Ottawa, ON
Good understanding and practical knowledge of web server technologies (IIS, Apache). Request code:....
From Bell Canada - Fri, 10 Aug 2018 19:45:12 GMT - View all Ottawa, ON jobs
          Intermediate Designer - The Economical Insurance Group - Kitchener, ON
Designers may have a specialization or area of focus (e.g., Digital, host, Java, Apache Hadoop, Web Services and Test Automation)....
From The Economical Insurance Group - Fri, 20 Jul 2018 20:25:45 GMT - View all Kitchener, ON jobs
          PHP/Drupal Technical Lead - Nurun Services Conseils - Montréal, QC
PHP, Drupal (7 &amp; 8), Laravel, Symfony, PHPunit, GIT, JSON, AJAX, MYSQL, Docker, Vagrant, Ansible, Composer, Kubernetes, Nexus, Jenkins, Apache, Nginx, SonarQube...
From Nurun Services Conseils - Fri, 08 Jun 2018 20:14:03 GMT - View all Montréal, QC jobs
          Store the system logs in MariaDB

I’ve used Elasticsearch on OpenBSD to store my system logs for quite a long time now. And while it does the job, there are a few things I don’t like so much about it.

I only used a single instance, so I was warned about availability. But a sudden power outage had a severe impact on my daily data. Way more than what I expected from production-ready software. Rebuilding and re-indexing the data was a real pain in the ass. From time to time, I also get errors about indexing that seem to go away without me doing anything.

The latter is probably due to my low-memory server. But I want to store logs for only a couple of boxes. And I don’t want to reserve 4GB of RAM just for this. This “gimme more RAM” manner really annoys me. And as I also need RAM for Logstash (to parse the data and send them to Elasticsearch), this leads to way too much resource consumption.

That said, I decided to test another way of storing the logs: using a DBMS, namely MariaDB. I already have one running smoothly. And I read that Grafana is able to read data from it using SQL commands.

How it’ll work
Store the system logs in MariaDB

The stock syslogd(8) will be configured to send everything it gets to a local (or remote) syslog-ng daemon. The latter will parse, filter, format and store the logs into a (remote) mysql / MariaDB instance.

Prepare the SGBD

I’m using mariadb-server-10.0.34v1 on OpenBSD 6.3/amd64.

First of all, I want to be able to compress the (text) data from the logs. So I had to enable a few InnoDB related options.

# vi /etc/my.cnf
(...)
innodb_file_per_table = 1
innodb_file_format = barracuda
innodb_strict_mode = 1
(...)
# rcctl restart mysqld

Then, I simply created a database and the credentials that’d be used by syslog-ng.

# mysql -u root -p
(...)
> CREATE database logsink;
> GRANT ALL PRIVILEGES ON logsink.* TO 'syslog-ng'@'%' IDENTIFIED BY 'changeme';
> FLUSH PRIVILEGES;

Install and configure Syslog-NG

There are drivers required by syslog-ng to store data into mysql.

# pkg_add syslog-ng libdbi-drivers-mysql

Syslog-NG will listen on all interfaces, on both UDP and TCP ports. This way, any other box can send its logs to it.

# vi /etc/syslog-ng/syslog-ng.conf
(...)
source s_net {
udp(port(8514));
tcp(port(8514));
};
(...)
destination d_mysql_compressed {
sql(
type(mysql)
host("127.0.0.1") username("syslog-ng") password("changeme")
database("logsink")
table("_all")
create-statement-append(ROW_FORMAT=COMPRESSED)
columns(
"seq bigint(20) unsigned NOT NULL AUTO_INCREMENT PRIMARY KEY",
"unixtime bigint NOT NULL",
"facility varchar(16)",
"priority varchar(16)",
"level varchar(16)",
"host varchar(64) NOT NULL",
"program varchar(64) NOT NULL",
"pid smallint",
"message text",
"tag varchar(32)"
)
values(
"", "${UNIXTIME}", "$FACILITY_NUM", "$PRIORITY", "$LEVEL_NUM",
"${HOST}", "$PROGRAM", "${PID}", "${MSGONLY}", "$TAG"
)
indexes("unixtime", "host", "program", "tag")
null("")
);
};
(...)
log { source(s_net); filter(f_all); destination(d_mysql_compressed); };
(...)
# rcctl enable syslog_ng
# rcctl start syslog_ng

When this is done, configure syslogd(8).

# vi /etc/syslog.conf
(...)
*.* @127.0.0.1:8514
# rcctl restart syslogd

Explore the logs

From here, the logs should be stored in MariaDB / MySQL.

A first look at the tables shows “COMPRESSED” is better than the standard storage regarding disk usage.

> SELECT TABLE_NAME,ENGINE,ROW_FORMAT,TABLE_ROWS,DATA_LENGTH,INDEX_LENGTH,DATA_FREE FROM information_schema.tables WHERE table_schema='logsink';
+----------------+--------+------------+------------+-------------+--------------+-----------+
| TABLE_NAME | ENGINE | ROW_FORMAT | TABLE_ROWS | DATA_LENGTH | INDEX_LENGTH | DATA_FREE |
+----------------+--------+------------+------------+-------------+--------------+-----------+
| _all | InnoDB | Compressed | 92409 | 6561792 | 5275648 | 2097152 |
| _all_compact | InnoDB | Compact | 93643 | 14172160 | 10551296 | 7340032 |
+----------------+--------+------------+------------+-------------+--------------+-----------+

From the filesystem POV, the gain is also clearly visible.

-rw-rw---- 1 _mysql _mysql 3.3K Aug 3 16:22 _all.frm
-rw-rw---- 1 _mysql _mysql 16.0M Aug 6 15:52 _all.ibd
-rw-rw---- 1 _mysql _mysql 3.3K Aug 3 16:22 _all_compact.frm
-rw-rw---- 1 _mysql _mysql 36.0M Aug 6 15:52 _all_compact.ibd

Having a look at the most verbose programs is just a matter of writing an SQL statement:

> SELECT program, COUNT(program) AS messages FROM _all GROUP BY program ORDER BY messages DESC;
+---------------------+----------+
| program | messages |
+---------------------+----------+
| monit | 47596 |
| smtpd | 19689 |
| rspamd | 12884 |
| doas | 4546 |
| collectd | 4265 |
| sshd | 3018 |
| cron | 2545 |
(...)

The logs can be accessed and rendered by Grafana. A simple query can print the latest logs. Add alerting when some value appears and you have a nice event-based monitoring tool.

Organize storage

There are logs that I don’t want to store. And there are some that I want to store in a specific table. This can be done in Syslog-NG using filters.

filter f_all {
not program("fetchmail");
and not program("monit");
and not filter(f_unbound);
and not filter(f_apache);
};

This will not send messages from fetchmail or monit to the compressed table. Nor will it send messages that match the f_unbound and f_apache filters. Those two guys are used to store messages in a specific table with a dedicated schema. I’ll probably write about the details some day…

Now… send all your logs to Syslog-NG rather than Logstash. Count to 10 and get your RAM back! So far, MariaDB seems to handle it pretty well.



