
          Presco Plc Job Recruitment (3 Positions)

Presco Plc is a fully-integrated agro-industrial establishment with oil palm plantations, palm oil mill, palm kernel crushing plant and vegetable oil refining and fractionation plant.

Applications are invited from suitably qualified candidates to fill the following positions below:

1.) Entomologist

Location: E


          Entomologist at Presco Plc

Presco Plc is a fully-integrated agro-industrial establishment with oil palm plantations, palm oil mill, palm kernel crushing plant and vegetable oil refining and fractionation plant.

Applications are invited from suitably qualified candidates to fill the position below:

Job Title: Entomologist

Location:

          Chief Security Officer at Presco Plc

Presco Plc is a fully-integrated agro-industrial establishment with oil palm plantations, palm oil mill, palm kernel crushing plant and vegetable oil refining and fractionation plant.

Applications are invited from suitably qualified candidates to fill the position below:

Job Title: Chief Security Officer

Location:


          Human Resources Manager at Presco Plc

Presco is a fully-integrated agro-industrial establishment with oil palm plantations, palm oil mill, palm kernel crushing plant and vegetable oil refining and fractionation plant. It also has an olein and stearin packaging plant and a biogas plant to treat its palm oil mill effluent. It is the first of its kind in West Africa.

We are recruiting to fill the position below:


          webform_scheduled_email.install

Problem/Motivation

After updating webform, I was trying to run drush updb and got several errors regarding webform_scheduled_email.install:

•	Warning: include_once(C:\Apache24\html\web\modules\contrib\webform\modules\webform_scheduled_emailincludes/webform.install.inc): failed to open stream: No such file or directory in include_once() (line 12 of modules\contrib\webform\modules\webform_scheduled_email\webform_scheduled_email.install). 
•	include_once() (Line: 12)
•	require_once('C:\Apache24\html\web\modules\contrib\webform\modules\webform_scheduled_email\webform_scheduled_email.install') (Line: 136)
•	module_load_include('install', 'webform_scheduled_email') (Line: 93)
•	module_load_install('webform_scheduled_email') (Line: 82)
•	drupal_load_updates() (Line: 146)
•	Drupal\system\Controller\DbUpdateController->handle('selection', Object)
•	call_user_func_array(Array, Array) (Line: 112)
•	Drupal\Core\Update\UpdateKernel->handleRaw(Object) (Line: 73)
•	Drupal\Core\Update\UpdateKernel->handle(Object) (Line: 28)
•	Warning: include_once(): Failed opening 'C:\Apache24\html\web\modules\contrib\webform\modules\webform_scheduled_emailincludes/webform.install.inc' for inclusion (include_path='.;C:\php\pear') in include_once() (line 12 of modules\contrib\webform\modules\webform_scheduled_email\webform_scheduled_email.install). 

The problem seems to be this, at lines 11/12:

$WEBFORM_ROOT = str_replace('/modules/webform_scheduled_email', '/', __DIR__);
include_once $WEBFORM_ROOT . 'includes/webform.install.inc';

It is not traversing back up the path to the actual webform root.

Proposed resolution

Replacing line 12 with the following works, but I'm not sure of the preferred way to make the correction:

include_once __DIR__ . '/../../includes/webform.install.inc';


          What’s for Dinner #420 (October 2018) - Fall
Soba (brown rice for me) noodles with lime, cardamom and avocado - my first recipe from Ottolenghi, Simple. (It's been killing me that I bought the book just before I went on holiday!) This was very delicious indeed, not in an immediate, obvious kind of way but the more I ate it, the more I realised how yummy it was! Some alterations - as mentioned above, I used brown rice noodles instead of soba, roasted salted pistachio kernels (and just omitted the salt in the recipe) and I added a bit of peanut butter too as it seemed like it was crying out for some satay goodness. All tasted good, and could probably go even further with something like a fried egg or crispy tofu (he does mention this in the description). It did take me a bit longer than his advertised 30 minutes, probably more like 40, but I was tired and (ironically) in a rush after my evening yoga class. Normal energy and time levels, this could probably be done in about 20/25 minutes. Can't wait to try out more from this book, first recipe definitely a success!
          How to Install the Galaxy S9 Emojis on Any Android

How to Install the Galaxy S9 Emojis on Any Android. DOWNLOAD THE FILE HERE. INFORMATION: Magisk has replaced SuperSU on many Android devices! Android is a system whose kernel is based on Linux, which makes it an open-source system. … The system, called […]

The post "How to Install the Galaxy S9 Emojis on Any Android" appeared first on Bob tutorias.


          A Summary of Fuzzing Techniques and a List of Tools

Copyright notice: this is an original post by the author and may not be reproduced without the author's permission. https://blog.csdn.net/wcventure/article/details/82085251

First, I recommend reading the 2018 Computing Surveys paper "Fuzzing: Art, Science, and Engineering":
https://github.com/wcventure/wcventure/blob/master/Paper/Fuzzing_Art_Science_and_Engineering.pdf
Second, I recommend the 2018 Cybersecurity paper "Fuzzing: a survey":
https://www.researchgate.net/publication/325577316_Fuzzing_a_survey
Both give detailed introductions to fuzzing techniques and fuzzing tools.

I. What is fuzzing?

"Fuzz" originally refers to fine fibers or hair, or to making something blurry. The term was later adopted in software testing - in Chinese it is usually rendered as "模糊测试" - and in English it is called either "fuzzing" or "fuzz testing." This article uses "fuzzing" throughout.

Fuzzing can be traced back to the 1950s. At that time, computer data was mostly stored on punched cards, and programs read those cards as input for computation and output. If a program ran into garbage cards, or discarded cards in the wrong format, it could produce errors, exceptions, or even crashes - and that is how bugs appeared. So fuzzing is not a new technique at all; it is an old testing technique that is as old as computers themselves.

Fuzzing is a black-box (or grey-box) testing technique that automatically generates and executes large numbers of random test cases in order to discover unknown vulnerabilities in a product or protocol. As computers have evolved, fuzzing has kept evolving with them.

II. Is fuzzing useful?

Fuzzing is "fuzzy" testing: as the name suggests, the test cases are indeterminate and fuzzy.

Computing is an exact science and technology, and testing ought to be the same: a given input should map to a well-defined output. So why would there be fuzzy, indeterminate test cases, and what good are they?

Why do indeterminate test cases exist? I think the main reasons are the following:

1. We cannot enumerate every possible input as a test case. When we write test cases we usually cover common scenarios such as positive tests, negative tests, boundary values, and overly long or overly short inputs, but there is no way to exhaustively test every input.

2. We cannot think of every possible abnormal scenario. Human brainpower is limited, and we cannot imagine every combination of failures, especially as today's software depends more and more on operating systems, middleware and third-party components. Bugs inside those systems, or bugs that arise from their combination, cannot be foreseen by the developers and testers of any single project.

3. Fuzzing software cannot traverse every abnormal scenario either. As software grows more complex, the space of possible inputs is effectively infinite, so even machine enumeration is impossible - otherwise your release would never ship. Fuzzing fundamentally relies on random functions to generate random test cases for validation, so it is inherently nondeterministic.

Can these indeterminate test cases achieve the testing results we want? Can they find real bugs?

1. Fuzzing is first of all an automation technique: software automatically executes relatively random test cases. Because execution is automated, testing efficiency is orders of magnitude higher than a human's. For example, even an excellent tester can run at most a few dozen test cases a day, rarely reaching 100, while a fuzzing tool can easily run hundreds of test cases within minutes.

2. Fuzzing fundamentally relies on random functions to generate random test cases; randomness means no repetition and no predictability, so there can be unexpected inputs and results.

3. According to the law of large numbers in probability theory, as long as we repeat often enough with enough randomness, even extremely low-probability events are bound to occur. Fuzzing is a textbook application of the law of large numbers: with enough test cases and enough randomness, deeply hidden, hard-to-trigger bugs become inevitable.

Today, fuzzing is one of the most effective approaches in software testing and vulnerability discovery. It is particularly well suited to finding 0-day vulnerabilities and is the technique of choice for many hackers and black hats looking for software flaws. Fuzzing by itself does not achieve an intrusion, but it makes it very easy to find vulnerabilities in software or systems; using those as a starting point for deeper analysis makes it much easier to find a way in - which is why hackers like fuzzing.

III. Generation-based and mutation-based fuzzing algorithms

In a fuzzing engine, test cases are mainly generated in one of two ways:
1) Mutation-based: new test cases are generated by mutating known data samples;
2) Generation-based: test cases are generated by modeling a known protocol or interface specification.
Most fuzzing tools use a combination of both approaches.

The core requirement of a mutation-based algorithm is to learn from the existing data model: based on the existing data and an analysis of it, random data is then generated as test cases.
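As a concrete illustration of the mutation-based approach described above, here is a minimal, hypothetical sketch in C (it is not taken from any of the tools listed later): it reads a seed file, flips a few random bits, and writes the mutated data out as a new test case.

/* Minimal mutation-based test-case generator (illustrative sketch only). */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(int argc, char **argv)
{
	if (argc != 3) {
		fprintf(stderr, "usage: %s <seed-file> <out-file>\n", argv[0]);
		return 1;
	}

	FILE *in = fopen(argv[1], "rb");
	if (!in)
		return 1;
	fseek(in, 0, SEEK_END);
	long size = ftell(in);
	rewind(in);

	unsigned char *buf = malloc(size);
	if (!buf || fread(buf, 1, size, in) != (size_t)size)
		return 1;
	fclose(in);

	/* Mutate: flip one random bit at a handful of random positions. */
	srand((unsigned)time(NULL));
	for (int i = 0; i < 8 && size > 0; i++) {
		long pos = rand() % size;
		buf[pos] ^= (unsigned char)(1u << (rand() % 8));
	}

	FILE *out = fopen(argv[2], "wb");
	if (!out)
		return 1;
	fwrite(buf, 1, size, out);
	fclose(out);
	free(buf);
	return 0;
}

A real mutation-based fuzzer adds the parts this sketch omits: a corpus of seeds, feedback about which mutations reached new behavior, and crash collection.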

IV. The state of the art: AFL

AFL is the best-known mutation-based fuzzer.
Here are some resources on the state-of-the-art AFL (a minimal sketch of the kind of target AFL is typically pointed at follows after this list):

  1. american fuzzy lop (2.52b)
    http://lcamtuf.coredump.cx/afl/
  2. Notes on AFL's internal implementation details
    http://rk700.github.io/2017/12/28/afl-internals/
  3. The afl-fuzz technical whitepaper
    https://blog.csdn.net/gengzhikui1992/article/details/50844857
  4. How to run a complete fuzzing session with AFL
    https://blog.csdn.net/abcdyzhang/article/details/53487683
  5. AFL (American Fuzzy Lop) implementation details and file mutation
    https://paper.seebug.org/496/
  6. Fuzzing in practice: libFuzzer
    https://www.secpulse.com/archives/71898.html
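Below is the promised sketch: a minimal, hypothetical C program of the kind afl-fuzz is usually pointed at. The toy file format, the deliberate bug, and the build/run commands in the comments are illustrative assumptions, not taken from the AFL documentation.

/* Hypothetical AFL target: parses a toy header from stdin.
 * Typical use (illustrative):
 *   afl-gcc -o target target.c
 *   afl-fuzz -i seeds -o findings -- ./target
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	unsigned char buf[64];
	size_t n = fread(buf, 1, sizeof(buf), stdin);

	if (n < 8)
		return 0;                   /* too short to be interesting */

	if (memcmp(buf, "FUZZ", 4) != 0)
		return 0;                   /* wrong magic */

	unsigned char len = buf[4];
	if ((size_t)len + 8 > n)
		return 0;                   /* truncated input */

	unsigned char copy[16];
	memcpy(copy, buf + 8, len);         /* deliberate bug: overflows when len > 16 */

	return copy[0];
}

Starting from a well-formed seed such as "FUZZ" plus a small length byte, AFL's mutations eventually produce a length larger than 16 and the target crashes, which is exactly the kind of "new internal state" the instrumentation rewards.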

[The original post shows several tables and figures at this point; only fragments of their captions are recoverable:]

- Vulnerability-discovery techniques: static analysis, dynamic analysis, symbolic execution, fuzzing.
- Generation-based fuzzing vs. mutation-based fuzzing.
- White-box, grey-box and black-box fuzzing.
- Key points of fuzzing techniques.
- (caption truncated in the original: "In fuzzing ...")
- Citation relationships among fuzzing papers to date; a classification and history of fuzzing tools.
- A survey of fuzzing tools, with a well-organized summary chart.

Finally, here is a list of some open-source fuzzing tools.
The original list comes from [https://www.peerlyst.com/posts/resource-open-source-fuzzers-list], with the newest tools of 2018 such as CollAFL and SnowFuzz added.
1. Open-source fuzzers
2. Fuzzing harnesses and frameworks
3. Other fuzzers that are free, but not worth mentioning next to the open-source ones
4. Effective fuzzing payloads
5. Blogs that will help you understand fuzzing better
6. Other fuzzing blogs and resources
7. Commercial fuzzers

1. Open-source fuzzers

CollAFL
http://chao.100871.net/papers/oakland18.pdf
A path-sensitive fuzzer that solves the bitmap path-collision problem in AFL.
It also proposes a seed-selection strategy that raises coverage faster.

SnowFuzz
https://arxiv.org/pdf/1708.08437.pdf

VUzzer
http://www.cs.vu.nl//~giuffrida/papers/vuzzer-ndss-2017.pdf
An application-aware, self-evolving fuzzer. In this paper we propose an application-aware evolutionary fuzzing strategy that requires no prior knowledge of the application or its input format. To maximize coverage and reach deeper paths, we leverage control- and data-flow features based on static and dynamic analysis to infer fundamental properties of the application. This enables much faster generation of interesting inputs than application-agnostic approaches. We implement our fuzzing strategy in VUzzer and evaluate it on three different datasets: the DARPA Grand Challenge binaries (CGC), a set of real-world applications (binary input parsers), and the recently released LAVA dataset.

Afl-fuzz (American fuzzy lop)
http://lcamtuf.coredump.cx/afl/
Afl-fuzz is a security-oriented fuzzer that employs a novel approach (compile-time instrumentation and genetic algorithms) to automatically discover clean, interesting test cases, i.e. ones that trigger new internal states in the targeted binary. This substantially improves the functional coverage of the fuzzed code. The compact synthesized corpora produced by the tool are also useful for seeding other, more labor- or resource-intensive testing regimes.
Compared with other instrumented fuzzers, afl-fuzz is designed to be practical: it has modest performance overhead, uses a variety of highly effective fuzzing strategies and effort-minimization tricks, requires essentially no configuration, and seamlessly handles complex, real-world use cases such as common image parsing or file compression libraries.

Filebuster
A very fast and flexible web fuzzer.

TriforceAFL
AFL/QEMU fuzzing with full-system emulation. This is a patched version of AFL that supports full-system fuzzing using QEMU. The bundled QEMU has been updated to allow branch tracing when running an x86_64 system emulator. Additional instructions have also been added to start AFL's forkserver, perform fuzzing setup, and mark the start and end of test cases.

Nightmare:
https://github.com/joxeankoret/nightmare
A distributed fuzzing testing suite with web administration.

Grr
A high-throughput fuzzer and emulator for DECREE binaries.

Randy:
http://ptrace-security.com/blog/randy-random-based-fuzzer-in-python/
A random-based fuzzer written in Python.

IFuzzer
An evolutionary interpreter fuzzer.

Dizzy:
https://github.com/ernw/dizzy
A Python-based fuzzing framework:
1. can send at layer 2 as well as to upper layers (TCP/UDP/SCTP)
2. can handle odd-length packet fields (no need to match byte boundaries, so even single flags or 7-bit-long words can be represented and fuzzed)
3. very easy protocol definition syntax
4. can do full, stateful, multi-packet fuzzing, and can use data received from the target in its responses

Address Sanitizer:
https://github.com/Google/sanitizers
AddressSanitizer, ThreadSanitizer, MemorySanitizer.

Diffy:
https://github.com/twitter/diffy
Use Diffy to find potential bugs in your services.

Wfuzz:
https://github.com/xmendez/wfuzz
A web application fuzzer. http://www.edge-security.com/wfuzz.php

Go-fuzz:
https://github.com/Google/gofuzz
Fuzz testing for Go.

Sulley:
https://github.com/OpenRCE/sulley
Sulley is an actively developed fuzzing engine and fuzz testing framework consisting of multiple extensible components. Sulley (IMHO) exceeds the capabilities of most previously published fuzzing technologies, commercial and public domain. The goal of the framework is to simplify not only data representation but also data transmission and instrumentation. Sulley is named after the creature from Monsters Inc., because he is fuzzy. Written in Python.

Sulley_l2:
http://ernw.de/download/sulley_l2.tar.bz2
Some may remember sulley_l2, released in 2008: a modified version of the Sulley fuzzing framework enhanced with layer-2 sending capabilities and a pile of (L2) fuzzing scripts. All the blinking, rebooting and mem-corrupting got us some attention. Since then we have continued writing and using these fuzzing scripts, so the collection of holes it has found has grown.

CERT Basic Fuzzing Framework (BFF) - for Linux, OSX
https://github.com/CERTCC-Vulnerability-Analysis/certfuzz
http://www.cert.org/vulnerability-analysis/tools/bff.cfm
The CERT Basic Fuzzing Framework (BFF) is a software testing tool that finds defects in applications running on Linux and Mac OS X. BFF performs mutational fuzzing on software that consumes file input. (Mutational fuzzing takes well-formed input data and corrupts it in various ways, looking for cases that cause crashes.) BFF automatically collects the test cases that cause the software to crash in unique ways, together with debugging information about the crashes. The goal of BFF is to minimize the effort required of software vendors and security researchers to efficiently discover and analyze security vulnerabilities found through fuzzing.

CERT Failure Observation Engine (FOE) - for Windows
http://www.cert.org/vulnerability-analysis/tools/foe.cfm
https://github.com/CERTCC-Vulnerability-Analysis/certfuzz
The CERT Failure Observation Engine (FOE) is a software testing tool that finds defects in applications running on Windows. FOE performs mutational fuzzing on software that consumes file input. (Mutational fuzzing takes well-formed input data and corrupts it in various ways, looking for cases that cause crashes.) FOE automatically collects the test cases that cause the software to crash in unique ways, together with debugging information about the crashes. The goal of FOE is to minimize the effort required of software vendors and security researchers to efficiently discover and analyze security vulnerabilities found through fuzzing.

Dranzer - for ActiveX controls
https://github.com/CERTCC-Vulnerability-Analysis/dranzer
Dranzer is a tool that enables users to examine effective techniques for fuzz testing ActiveX controls.

Radamsa - a general-purpose fuzzer
https://github.com/aoh/radamsa
Radamsa is a test-case generator for robustness testing, also known as a fuzzer. It can be used to test how well a program withstands malformed and potentially malicious input. It works by producing files that are interestingly different from the files it is given, and then feeding the modified files to the target program, either directly or via some script. Radamsa's main selling points over other fuzzers are that it is very easy to get running on most machines and very easy to script from the command line, and it has already been used to find a range of security problems in programs that you may be using right now.

zzuf - an application fuzzer
https://github.com/samhocevar/zzuf
zzuf is a transparent application input fuzzer. It works by intercepting file operations and changing random bits in the program's input. zzuf's behavior is deterministic, which makes it easy to reproduce bugs. For instructions and examples on how to use zzuf, see the man page and the website http://caca.zoy.org/wiki/zzuf

Backfuzz
https://github.com/localh0t/backfuzz
Backfuzz is a fuzzing tool for various protocols (FTP, HTTP, IMAP, etc.) written in Python. The general idea is that the script comes with several predefined functions, so anyone who wants to write their own plugin (for another protocol) can do so in a few lines.

KEmuFuzzer
https://github.com/jrmuizel/kemufuzzer
KEmuFuzzer is a tool for testing system virtual machines based on emulation or direct native execution. KEmuFuzzer currently supports BOCHS, QEMU, VMware and VirtualBox.

Pathgrind
https://github.com/codelion/pathgrind
Pathgrind uses path-based dynamic analysis to fuzz Linux/Unix binaries. It is based on Valgrind and written in Python.

Wadi-fuzzer
https://www.sensepost.com/blog/2015/wadi-fuzzer/ https://gitlab.sensepost.com/saif/DOM-Fuzzer
Wadi is a fuzzer based on web-browser grammars. These grammars describe how a browser should process web content, and Wadi turns around and uses them to break browsers.
Wadi is a fuzzing module for the NodeFuzz fuzzing harness and uses AddressSanitizer (ASan) for testing on Linux and Mac OSX.
The World Wide Web Consortium (W3C) is an international organization that develops open standards to ensure the long-term growth of the Web. The W3C lets us look up grammars and use them in our test cases.

LibFuzzer, Clang-format-fuzzer, clang-fuzzer
http://llvm.org/docs/LibFuzzer.html
http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/clang-format/fuzzer/ClangFormatFuzzer.cpp?view=markup
http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/clang-fuzzer/ClangFuzzer.cpp?view=markup
We implemented two fuzzers on top of LibFuzzer: clang-format-fuzzer and clang-fuzzer. clang-format is mostly a lexer, so feeding it random bytes had it formatting them just fine, yet this still turned up more than 20 bugs. Clang, however, is much more than a lexer, and throwing random bytes at it barely scratched the surface, so in addition to testing random bytes we also fuzzed Clang in a token-aware mode. Bugs were found in both modes; some of them had previously been detected by AFL, others had not: we ran this fuzzer with AddressSanitizer, and some of the bugs would not have been easy to find without it.
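For comparison with the AFL-style target sketched earlier, a libFuzzer target is just a function that consumes a byte buffer. The entry point below uses the documented LLVMFuzzerTestOneInput() interface; the toy parser it drives is an invented stand-in for real code under test such as clang-format's lexer.

/* Minimal libFuzzer target.
 * Build sketch: clang -g -fsanitize=fuzzer,address target.c
 */
#include <stdint.h>
#include <stddef.h>

/* Invented stand-in for the code under test: checks bracket balance. */
static int parse_toy_input(const uint8_t *data, size_t size)
{
	int depth = 0;

	for (size_t i = 0; i < size; i++) {
		if (data[i] == '(')
			depth++;
		else if (data[i] == ')')
			depth--;
		if (depth < 0)
			return -1;
	}
	return depth == 0 ? 0 : -1;
}

int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
	parse_toy_input(data, size);
	return 0;   /* non-zero return values are reserved by libFuzzer */
}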

Perf-fuzzer
http://www.eece.maine.edu/~vweaver/projects/perf_events/validation/
https://github.com/deater/perf_event_tests
http://web.eece.maine.edu/~vweaver/projects/perf_events/fuzzer/
A test suite for the Linux perf_event subsystem.

HTTP/2 Fuzzer
https://github.com/c0nrad/http2fuzz
An HTTP/2 fuzzer written in Golang.

QuickFuzz
http://quickfuzz.org/
QuickFuzz is a grammar fuzzer that uses QuickCheck, Template Haskell and specific Hackage libraries to generate many complex file formats such as JPEG, PNG, SVG, XML, ZIP, TAR and more! QuickFuzz is open source (GPL3) and can be combined with other bug-detection tools such as zzuf, radamsa, honggfuzz and valgrind.

SymFuzz
https://github.com/maurer/symfuzz
http://ieeexplore.IEEE.org/xpls/abs_all.jsp?arnumber=7163057
Abstract: we present an algorithm design that maximizes the number of bugs found by black-box mutational fuzzing given a program and a seed input. The key intuition is to leverage white-box symbolic analysis of the execution trace of a given program-seed pair to detect dependencies among the bit positions of the input, and then to use those dependencies to compute a probabilistically optimal mutation ratio for that program-seed pair. Our results are promising: with the same fuzzing time, we found on average 38.6% more bugs than three previous fuzzers across eight applications.

OFuzz
https://github.com/sangkilc/ofuzz
OFuzz is a fuzzing platform written in OCaml. OFuzz currently focuses on file-processing applications running on *nix platforms. The key design principle of OFuzz is flexibility: it must be easy to add or replace fuzzing components (crash triage modules, test-case generators, etc.) or algorithms (mutation algorithms, scheduling algorithms).

Bed
http://www.snake-basket.de/
A network protocol fuzzer. BED is a program designed to check daemons for potential buffer overflows, format string bugs, and the like.

Neural Fuzzer
https://cifasis.github.io/neural-fuzzer/
The Neural fuzzer is an experimental fuzzer designed to use state-of-the-art machine learning to learn from an initial set of files. It works in two phases: training and generation.

Pulsar
https://github.com/hgascon/pulsar
A protocol-learning, simulation and stateful fuzzer.
Pulsar is a network fuzzer with automatic protocol-learning and simulation capabilities. The tool allows a protocol to be modeled using machine-learning techniques such as clustering and hidden Markov models. These models can be used to simulate communication between Pulsar and a real client or server; combined with a series of fuzzing primitives, those messages allow an unknown protocol implementation to be tested for bugs in deeper states of the protocol.

D-Bus fuzzer:
https://github.com/matusmarhefka/dfuzzer
dfuzzer is a D-Bus fuzzer, a tool for fuzz testing processes that communicate over D-Bus. It can be used to test processes connected to both the session bus and the system bus daemon. The fuzzer works as a client: it first connects to the bus daemon, then traverses and fuzz-tests all the methods provided by a D-Bus service.

Choronzon
https://census-labs.com/news/2016/07/20/choronzon-public-release/
Choronzon is an evolutionary fuzzing tool. It tries to imitate the evolutionary process in order to keep producing better results. To achieve this, it has an evaluation system that classifies which fuzzed files are interesting and which should be dropped.
Furthermore, Choronzon is a knowledge-based fuzzer. It uses user-defined information to read and write files of the targeted file format. To get familiar with Choronzon's terminology, consider that each file is represented by a chromosome. Users describe the basic structure of the file format under consideration - preferably a high-level overview of the format rather than every detail and aspect of it. Each of those user-defined basic structures is considered a gene; each chromosome contains a tree of genes and is able to build the corresponding file from it.

Exploitable
[screenshot in the original post]
'exploitable' is a GDB extension that classifies Linux application bugs by severity. The extension inspects the state of a crashed Linux application and outputs a summary of how difficult it would be for an attacker to exploit the underlying software bug to gain control of the system. The extension can be used to prioritize bugs for software developers so that they can address the most severe ones first.
The extension implements a GDB command called 'exploitable'. The command uses heuristics to describe the exploitability of the state of the application currently being debugged in GDB. The command is designed for Linux platforms and GDB versions that include the GDB Python API. Note that at this time the command will not run correctly on core-file targets.

Hodor
[screenshot in the original post]

We wanted to design a general-purpose fuzzer that could be configured to use known-good inputs and delimiters in order to fuzz specific locations - something between a completely dumb fuzzer and something smarter, with far less effort than implementing a proper smart fuzzer.

BrundleFuzz
https://github.com/carlosgprado/BrundleFuzz
BrundleFuzz is a distributed fuzzer for Windows and Linux that uses dynamic binary instrumentation.

Netzob
https://www.netzob.org/
An open-source tool for reverse engineering, traffic generation and fuzzing of communication protocols.

PassiveFuzzFrameworkOSX
This framework is used to fuzz OSX kernel vulnerabilities based on a passive inline-hook mechanism running in kernel mode.

syntribos
A Python API security testing tool from the OpenStack Security Group.

honggfuzz
http://google.github.io/honggfuzz/
A general-purpose, easy-to-use fuzzer with interesting analysis options. Supports feedback-driven fuzzing based on code coverage.

dotdotpwn
http://dotdotpwn.blogspot.com/
A directory traversal fuzzer.

KernelFuzzer
A cross-platform kernel fuzzing framework. DEF CON 24 video:
https://www.youtube.com/watch?v=M8ThCIfVXow

PyJFuzz
PyJFuzz - Python JSON Fuzzer
PyJFuzz is a small, extensible and ready-to-use framework for fuzzing JSON inputs, such as mobile REST API endpoints, JSON implementations, browsers, CLI executables and more.

RamFuzz
A fuzzer for individual method parameters.

EMFFuzzer
An enhanced metafile fuzzer based on the Peach fuzzing framework.

js-fuzz
An AFL-inspired genetic fuzz tester for JavaScript.

syzkaller
syzkaller is an unsupervised, coverage-guided Linux system-call fuzzer.

2. Fuzzing harnesses and frameworks that make your fuzzer better:

FuzzFlow
Fuzzflow is a distributed fuzzing management framework from Cisco Talos that offers virtual machine management, fuzz job configuration, pluggable mutation engines, pre/post-mutation scripts, crash collection, and pluggable crash analysis.

fuzzinator
Fuzzinator is a fuzz testing framework that helps you automate the tasks usually needed in a fuzz session:
- run your favorite test generator and feed the test cases to the system under test,
- catch and save unique issues,
- reduce failing test cases,
- ease reporting issues in bug trackers (e.g., Bugzilla or GitHub),
- regularly update the SUT if needed,
- schedule multiple SUTs and generators without overloading your workstation.

Fuzzlabs
https://github.com/DCNWS/FuzzLabs
FuzzLabs is a modular fuzzing framework written in Python. It uses a modified version of the amazing Sulley fuzzing framework as its core engine. FuzzLabs is still under development.

Nodefuzz
https://github.com/attekett/NodeFuzz
For Linux and Mac OSX. NodeFuzz is a fuzzer harness for web browsers and browser-like applications. There are two main ideas behind NodeFuzz: the first is to create a simple and fast way to fuzz different browsers; the second is to have a harness that can easily be extended with new test-case generators and client instrumentation without modifications to the core.

Grinder
https://github.com/stephenfewer/grinder
For Windows.
Grinder is a system for automating the fuzzing of web browsers and the management of large numbers of crashes.

Kitty
https://github.com/Cisco-sas/kitty
Kitty is an open-source modular and extensible fuzzing framework written in Python, inspired by OpenRCE's Sulley and Michael Eddington's (now Deja vu Security's) Peach Fuzzer.

Peach
http://community.peachfuzzer.com/
https://github.com/MozillaSecurity/peach
Peach is a SmartFuzzer capable of performing both generation-based and mutation-based fuzzing.

3. In addition, there are these fuzzers, which are free but not open source:

SDL MiniFuzz File Fuzzer
https://www.Microsoft.com/en-us/download/details.aspx?id=21769
For Windows. SDL MiniFuzz File Fuzzer is a basic file fuzzing tool designed to ease the adoption of fuzz testing by non-security developers who are unfamiliar with file fuzzing tools or have never used them in their current software development processes.

Rfuzz
http://rfuzz.rubyforge.org/index.html
RFuzz is a Ruby library that makes it easy to test web applications from the outside using a fast HttpClient and a wicked evil RandomGenerator, allowing ordinary programmers to use advanced fuzzing techniques every day.

Spike
http://www.immunitysec.com/downloads/SPIKE2.9.tgz
SPIKE is an API framework that lets you write fuzzers.

Regex Fuzzer
http://go.microsoft.com/?linkid=9751929
SDL Regex Fuzzer is a verification tool that helps test regular expressions for potential denial-of-service vulnerabilities. Regular expression patterns containing certain clauses that execute in exponential time (for example, repeated clauses that themselves contain repetition) can be exploited by attackers to cause a denial-of-service (DoS) condition. SDL Regex Fuzzer integrates with the SDL Process Template and the MSF-Agile+SDL Process Template to help users track and eliminate any detected regular expression vulnerabilities in their projects.

4. Blogs that will help you fuzz better

Fuzzing workflows - a fuzz job from start to finish, with AFL (a complete fuzz job by foxglovesecurity)
http://foxglovesecurity.com/2016/03/15/fuzzing-workflows-a-fuzz-job-from-start-to-finish/

Fuzz smarter, not harder - an AFL primer from BSidesSF 2016
https://www.peerlyst.com/posts/bsidessf-2016-recap-of-fuzz-smarter-not-harder-an-afl-primer-claus-cramon

Fuzzing with afl is an art
Fuzzing nginx with American Fuzzy Lop
You can leave suggestions in the comments here or in this Google doc:
https://docs.google.com/document/d/17pZxfs8hXBCnhfHoKfJ7JteGziNB2V_VshsVxmNRx6U/edit?usp=sharing

BSidesLisbon 2016 keynote: The Smart Fuzzer Revolution
Windows kernel fuzzing for beginners - Ben Nagy

5. Other fuzzer blogs:
Fuzzing with compiler transformations
Google has launched OSS-Fuzz (thanks to Dinko Cherkezov) - a project aimed at continuously fuzzing open-source projects:
OSS-Fuzz is currently in beta and will soon accept suggestions for candidate open-source projects. To be accepted into OSS-Fuzz, a project needs to have a large user base or be critical to global IT infrastructure - a general heuristic that we deliberately interpret loosely at this early stage. See here for more details and instructions on how to apply.
Once a project is signed up for OSS-Fuzz, it is automatically fed into our tracker, and newly reported bugs have a disclosure deadline of 90 days (see here for details). This matches industry best practice and improves end-user security and stability by getting patches to users faster.
Help us make sure this program truly serves the open-source community and the Internet that depends on this critical software: contribute and leave your feedback on GitHub.

[image in the original post]

6. Commercial fuzzers

beSTORM, from Beyond Security
http://www.beyondsecurity.com/bestorm_and_the_SDL.html
Admin edit: find more awesome Peerlyst community-contributed resources in the resource catalog here.
[image in the original post]

7. Fuzzing for browsers

Skyfire: a data-driven seed generation tool for fuzzing
https://www.inforsec.org/wp/?p=2678
https://www.ieee-security.org/TC/SP2017/papers/42.pdf

A getting-started guide to fuzzing Chrome V8 with libFuzzer
http://www.4hou.com/info/news/6191.html


          10-09-18 A kernel of culture
Corn plays a variety of major roles in Native culture and is a key ingredient in many Native foods. It originated in Mexico and quickly became a staple across the Americas as Indigenous farmers and seed keepers conditioned the plant to live in deserts, grasslands and high mountains. Today, Indigenous strains of corn have a smaller presence, but there are efforts to revitalize traditional corn for the benefit of Native culture, economics and health.
          "Thermal Pressure" Kernel Feature Would Help Linux Performance When Running Hot      Cache   Translate Page      
Linaro engineer Thara Gopinath sent out an experimental set of kernel patches today that introduces the concept of "thermal pressure" to the Linux kernel for helping assist Linux performance when the processor cores are running hot...
          ROCm 1.9.1 Released With Vega 7nm DPM Support, Profiling Fix
As a follow-up to the ROCm 1.9 release from a month ago that brought initial Vega 20 support, upstream kernel compatibility with the AMDKFD code, and other improvements, ROCm 1.9.1 was quietly released a few days ago...
          RedHat: RHSA-2018-2846:01 Important: kernel security and bug fix update
LinuxSecurity.com: An update for kernel is now available for Red Hat Enterprise Linux 6. Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability
          Reddit: I plugged in a webcam, and it just worked

I had never used a webcam on Linux before and considering the amount of trouble I have with my wifi, I was expecting this to be a whole thing. No extra packages to install, no random kernel modules compiled from some person's Github repo, no tearing my hair out at 2 AM after X suddenly stopped launching. None of that, it just worked.

For those interested, it's a Logitech HD Pro Webcam C920 on Ubuntu 18.04.

submitted by /u/kennethjor
          LXer: Canonical Releases Important Ubuntu Kernel Live Patch to Fix L1TF, SpectreRSB
Canonical released a new kernel live patch for all its supported Ubuntu Linux operating systems to address several critical security vulnerabilities discovered by various researchers lately.
          Canonical Releases Important Ubuntu Kernel Live Patch to Fix L1TF, SpectreRSB
Canonical released a new kernel live patch for all its supported Ubuntu Linux operating systems to address several critical security vulnerabilities discovered by various researchers lately.
          Gentoo-Based Calculate Linux 18 Released with Linux Kernel 4.18, Faster Boot
Alexander Tratsevskiy announced the release of Calculate Linux 18, a major version of his Gentoo-based operating system targeting the Russian Linux community.
          PostgreSQL Database Administrator - Upgrade - Montreal, WI
Solid Linux fundamentals including kernel and OS tuning, as they relate to DB performance and security. Upgrade is a consumer credit platform that is changing...
From Upgrade - Wed, 22 Aug 2018 22:02:31 GMT - View all Montreal, WI jobs
          [tip:x86/mm 4/4] htmldocs: kernel/resource.c:337: warning: Functio ...
kbuild test robot writes: (Summary)
tree: https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86/mm
head: b69c2e20f6e4046da84ce5b33ba1ef89cb087b40
commit: b69c2e20f6e4046da84ce5b33ba1ef89cb087b40 [4/4] resource: Clean it up a bit
reproduce: make htmldocs
All warnings (new ones prefixed by >>):
WARNING: convert(1) not found, for SVG to PDF conversion install ImageMagick (https://www.imagemagick.org)
kernel/resource.c:337: warning: Function parameter or member 'start' not described in 'find_next_iomem_res'
kernel/resource.c:337: warning: Function parameter or member 'end' not described in 'find_next_iomem_res'
kernel/resource.c:337: warning: Function parameter or member 'flags' not described in 'find_next_iomem_res'
kernel/resource.c:337: warning: Function parameter or member 'desc' not described in 'find_next_iomem_res'
kernel/re
          Re: [BUG -next 20181008] list corruption with "mm/slub: remove use ...
Andrew Morton writes:
On Tue, 9 Oct 2018 08:35:00 +0200 Heiko Carstens <heiko.carstens@de.ibm.com> wrote:
kernel BUG at lib/list_debug.c:31!
Thanks much. I'll drop
mm-slub-remove-useless-condition-in-deactivate_slab.patch.
          [GIT PULL] percpu fixes for-4.19-rc8
Dennis Zhou writes: (Summary)
This caused a memory leak when percpu memory is being churned resulting in the allocation and deallocation of percpu memory chunks.

Thanks,
Dennis

The following changes since commit 0238df646e6224016a45505d2c111a24669ebe21:

  Linux 4.19-rc7 (2018-10-07 17:26:02 +0200)

are available in the Git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/dennis/percpu.git for-4.19-fixes

for you to fetch changes up to 6685b357363bfe295e3ae73665014db4aed62c58:

  percpu: stop leaking bitmap metadata blocks (2018-10-07 14:50:12 -0700)

-------------------------------
          Re: perf report segfault
Jiri Olsa writes: (Summary)
$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux.git
$ cd linux
$ git checkout -b perf/core origin/perf/core
$ cd tools/perf
$ make
$ ./perf ...
you might need some packages mentioned in (search for 'yum install'): https://perf.wiki.kernel.org/index.php/Jolsa_Howto_Install_Sources
it's little outdated, but the packages lists will do
thanks,
jirka

          Internship- Product Development- VM Monitor Group - VMware - Palo Alto, CA
VMware is a global leader in cloud infrastructure and business mobility. Excellent knowledge of OS kernel internals, including memory management, resource...
From VMware - Mon, 01 Oct 2018 19:00:23 GMT - View all Palo Alto, CA jobs
          [tip:x86/mm 3/4] htmldocs: kernel/resource.c:338: warning: Functio ...
kbuild test robot writes: (Summary)
tree: https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86/mm
head: b69c2e20f6e4046da84ce5b33ba1ef89cb087b40
commit: 010a93bf97c72f43aac664d0a685942f83d1a103 [3/4] resource: Fix find_next_iomem_res() iteration issue
reproduce: make htmldocs
All warnings (new ones prefixed by >>):
WARNING: convert(1) not found, for SVG to PDF conversion install ImageMagick (https://www.imagemagick.org)
kernel/resource.c:338: warning: Function parameter or member 'first_level_children_only' not described in 'find_next_iomem_res'
include/linux/srcu.h:175: warning: Function parameter or member 'p' not described in 'srcu_dereference_notrace'
include/linux/srcu.h:175: warning: Function parameter or member 'sp' not described in 'srcu_dereference_notrace'
include/linux/gfp.h:1: warning: no structured comme
          Re: [RFC PATCH] kernel/panic: Filter out a potential trailing newline
Steven Rostedt writes:
On Tue, 9 Oct 2018 22:50:19 +0200
Borislav Petkov <bp@alien8.de> wrote:
Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
-- Steve

          Re: [PATCH] printk: inject caller information into the body of message
Tetsuo Handa writes: (Summary)
As soon as we run out of statically preallocated buffers, we need to fall back to unbuffered printk() before such a threshold elapses.
And I would like to be sure that the API is sane.
If we worry about get_printk_buffer() without a corresponding put_printk_buffer(), we will also need to worry about a "struct printk_buffer" returned by get_printk_buffer() being shared by multiple threads by error. Showing the backtrace (by enabling a debug kernel config option for this API) will be sufficient.
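As a rough picture of the API being discussed, here is a hypothetical usage sketch; the signatures and the printk_buffered() helper are guesses based only on the function names mentioned in this thread, not an actual kernel interface.

/* Hypothetical sketch of buffered printk usage, as described in the thread.
 * get_printk_buffer()/put_printk_buffer()/printk_buffered() signatures are
 * assumptions for illustration only.
 */
struct printk_buffer *buf = get_printk_buffer();

/* Several related lines are accumulated into the buffer... */
printk_buffered(buf, "device %d: state A\n", id);
printk_buffered(buf, "device %d: state B\n", id);

/* ...and released in one place, so lines from other CPUs cannot be
 * interleaved between them. Forgetting this call is the misuse the
 * thread proposes to catch with a debug config option and a backtrace.
 */
put_printk_buffer(buf);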

          Re: [RFC PATCH] kernel/panic: Filter out a potential trailing newline
Kees Cook writes:
On Tue, Oct 9, 2018 at 1:50 PM, Borislav Petkov <bp@alien8.de> wrote:
Cc: x86@kernel.org
Ah yes, I like this. :)
Acked-by: Kees Cook <keescook@chromium.org>
-Kees
2.19.0.271.gfe8321ec057f

          Re: [PATCH 4.18 000/168] 4.18.13-stable review
Guenter Roeck writes: (Summary)
On Mon, Oct 08, 2018 at 08:29:40PM +0200, Greg Kroah-Hartman wrote:
Anything received after that time might be too late.
Build results:
total: 136 pass: 136 fail: 0
Qemu test results:
total: 321 pass: 321 fail: 0
Details are available at https://kerneltests.org/builders/.
Guenter

          Re: [PATCH 4.14 00/94] 4.14.75-stable review
Guenter Roeck writes: (Summary)
On Mon, Oct 08, 2018 at 08:30:41PM +0200, Greg Kroah-Hartman wrote:
Anything received after that time might be too late.
Build results:
total: 150 pass: 150 fail: 0
Qemu test results:
total: 318 pass: 318 fail: 0
Details are available at https://kerneltests.org/builders/.
Guenter

          Re: [PATCH 4.9 00/59] 4.9.132-stable review
Guenter Roeck writes: (Summary)
On Mon, Oct 08, 2018 at 08:31:07PM +0200, Greg Kroah-Hartman wrote:
Anything received after that time might be too late.
Build results:
total: 150 pass: 150 fail: 0
Qemu test results:
total: 308 pass: 308 fail: 0
Details are available at https://kerneltests.org/builders/.
Guenter

          livelock with hrtimer cpu_base->lock
Sodagudi Prasad writes: (Summary)
Hi Will,
This is regarding the thread "try to fix contention between expire_timers and try_to_del_timer_sync".
https://lkml.org/lkml/2017/7/28/172
I think this live lockup issue was discussed earlier but the final set of changes was not concluded.
I would like to check whether you have new updates on this issue or not. I am thinking of fixing this at the cpu_relax() level.

+++ b/kernel/time/hrtimer.c
@@ -52,6 +52,7 @@
 #include <linux/timer.h>
          [RFC PATCH] kernel/panic: Filter out a potential trailing newline
Borislav Petkov writes: (Summary)
From: Borislav Petkov <bp@suse.de>
If a call to panic() terminates the string with a \n, the result puts the closing brace ']---' on a newline because panic() itself adds \n too.
Now, if one goes and removes the newline chars from all panic() invocations - and the stats right now look like this:
~300 calls with an \n
~500 calls without a \n
one is destined to a neverending game of whack-a-mole because the usual thing to do is add a newline at the end of a string a function is supposed to print.
Therefore, simply zap any \n at the end of the panic string to avoid touching so many places in the kernel.
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
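
A minimal sketch of the described filtering in plain C (an illustration of the idea only, not the actual patch; the helper and its caller are invented for the example):

#include <string.h>

/* Strip one trailing newline from the formatted panic message, so the
 * caller's own "\n" does not push the closing "]---" onto its own line.
 */
static void strip_trailing_newline(char *msg)
{
	size_t len = strlen(msg);

	if (len && msg[len - 1] == '\n')
		msg[len - 1] = '\0';
}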
          Markus Koschany: My Free Software Activities in September 2018

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

  • Yavor Doganov continued his heroics in September and completed the port to GTK 3 of teg, a risk-like game. (#907834) Then he went on to fix gnome-breakout.
  • I packaged a new upstream release of freesweep, a minesweeper game, which fixed some minor bugs but unfortunately not #907750.
  • I spent most of the time this month on packaging a newer upstream version of unknown-horizons, a strategy game similar to the old Anno games. After also upgrading the fife engine, fifechan and NMUing python-enet, the game is up-to-date again.
  • More new upstream versions this month: atomix, springlobby, pygame-sdl2, and renpy.
  • I updated widelands to fix an incomplete appdata file (#857644) and to make the desktop icon visible again.
  • I enabled gconf support in morris (#908611) again because gconf will be supported in Buster.
  • Drascula, a classic adventure game, refused to start because of changes to the ScummVM engine. It is working now. (#908864)
  • In other news I backported freeorion to Stretch and sponsored a new version of the runescape wrapper for Carlos Donizete Froes.

Debian Java

  • Only late in September I found the time to work on JavaFX but by then Emmanuel Bourg had already done most of the work and upgraded OpenJFX to version 11. We now have a couple of broken packages (again) because JavaFX is no longer tied to the JRE but is designed more like a library. Since most projects still cling to JavaFX 8 we have to fix several build systems by accommodating those new circumstances.  Surely there will be more to report next month.
  • A Ubuntu user reported that importing furniture libraries was no longer possible in sweethome3d (LP: #1773532) when it is run with OpenJDK 10. Although upstream is more interested in supporting Java 6, another user found a fix which I could apply too.
  • New upstream versions this month: jboss-modules, libtwelvemonkeys-java, robocode, apktool, activemq (RC #907688), cup and jflex. The cup/jflex update required a careful order of uploads because both packages depend on each other. After I confirmed that all reverse-dependencies worked as expected, both parsers are up-to-date again.
  • I submitted two point updates for dom4j and tomcat-native to fix several security issues in Stretch.

Misc

  • Firefox 60 landed in Stretch which broke all xul-* based browser plugins. I thought it made sense to backport at least two popular addons, ublock-origin and https-everywhere, to Stretch.
  • I also prepared another security update for discount (DSA-4293-1) and uploaded  libx11 to Stretch to fix three open CVE.

Debian LTS

This was my thirty-first month as a paid contributor and I have been paid to work 29,25 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 24.09.2018 until 30.09.2018 I was in charge of our LTS frontdesk. I investigated and triaged CVE in dom4j, otrs2, strongswan, python2.7, udisks2, asterisk, php-horde, php-horde-core, php-horde-kronolith, binutils, jasperreports, monitoring-plugins, percona-xtrabackup, poppler, jekyll and golang-go.net-dev.
  • DLA-1499-1. Issued a security update for discount fixing 4 CVE.
  • DLA-1504-1. Issued a security update for ghostscript fixing 14 CVE.
  • DLA-1506-1. Announced a security update for intel-microcode.
  • DLA-1507-1. Issued a security update for libapache2-mod-perl2 fixing 1 CVE.
  • DLA-1510-1. Issued a security update for glusterfs fixing 11 CVE.
  • DLA-1511-1. Issued an update for reportbug.
  • DLA-1513-1. Issued a security update for openafs fixing 3 CVE.
  • DLA-1517-1. Issued a security update for dom4j fixing 1 CVE.
  • DLA-1523-1. Issued a security update for asterisk fixing 1 CVE.
  • DLA-1527-1 and DLA-1527-2. Issued a security update for ghostscript fixing 2 CVE and corrected an incomplete fix for CVE-2018-16543 later.
  • I reviewed and uploaded strongswan and otrs2 for Abhijith PA.

ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 „Wheezy“. This was my fourth month and I have been paid to work 15  hours on ELTS.

  • I was in charge of our ELTS frontdesk from 10.09.2018 until 16.09.2018 and I triaged CVE in samba, activemq, chromium-browser, curl, dom4j, ghostscript, firefox-esr, elfutils, gitolite, glib2.0, glusterfs, imagemagick, lcms2, lcms, jhead, libpodofo, libtasn1-3, mgetty, opensc, openafs, okular, php5, smarty3, radare, sympa, wireshark, zsh, zziplib and intel-microcode.
  • ELA-35-1. Issued a security update for samba fixing 1 CVE.
  • ELA-36-1. Issued a security update for curl fixing 1 CVE.
  • ELA-37-2. Issued a regression update for openssh.
  • ELA-39-1. Issued a security update for intel-microcode addressing 6 CVE.
  • ELA-42-1. Issued a security update for libapache2-mod-perl2 fixing 1 CVE.
  • ELA-45-1. Issued a security update for dom4j fixing 1 CVE.
  • I started to work on a security update for the Linux kernel which will be released shortly.

Thanks for reading and see you next time.


          [LKP] fde06e0775 [ 9.203072] kernel BUG at lib/list_debug.c:31!
kernel test robot writes: (Summary)
          Re: [PATCH v4 2/3] mm: introduce put_user_page*(), placeholder ver ...
John Hubbard writes: (Summary)
On 10/9/18 4:20 PM, Andrew Morton wrote:
Methinks a bit more explanation is needed in these changelogs?
Did the proposed steps in the changelogs, such as:
[2] https://lkml.kernel.org/r/20180709080554.21931-1-jhubbard@nvidia.com Proposed steps for fixing get_user_pages() + DMA problems.
can be done in follow up patchsets.
I'm working on an RFC that will show what the long-term fix to get_user_pages and put_user_pages will look like.
          Global Cashew Nut (as Kernels) Market Insights, Forecast to 2025
This report researches the worldwide Cashew Nut (as Kernels) market size (value, capacity, production and consumption) in key regions like North America, Europe, Asia Pacific (China, Japan) and other regions. This study categorizes the global Cashew Nut (as Kernels) breakdown data by...

Original Post Global Cashew Nut (as Kernels) Market Insights, Forecast to 2025 source Market Research Reports
          Software Developer - Wafer Space Semiconductor Technologies Private Limited - Bengaluru, Karnataka
Must have experience with Architecture level with multiple SW technology development and e2e product integration and product scope(kernel, MW, android framework...
From Monster IN - Tue, 09 Oct 2018 14:34:36 GMT - View all Bengaluru, Karnataka jobs
          Proxmox Mail Gateway 5.1 released

Proxmox Server Solutions released version 5.1 of its open-source email security platform Proxmox Mail Gateway. The Mail Gateway is an operating system based on Debian Stretch 9.5 with a 4.15 kernel. The anti-spam and anti-virus filtering solution functions like a full featured mail proxy deployed between the firewall and the internal mail server and protects organizations against spam, viruses, Trojans, and phishing emails. Proxmox Mail Gateway 5.1 comes with Debian security updates, new features, bug … More

The post Proxmox Mail Gateway 5.1 released appeared first on Help Net Security.


          Re: Which Linux to choose - to have access to the largest number of video tutorials.
Quote
b) there are no independent benchmarks confirming how awesome this kernel is.
I did find one article.
https://www.dobreprogramy.pl/Cala-prawda-o-kernelach-eXt73-sprawdzamy-co-moze-dac-platna-optymalizacja-Kubuntu,News,59826.html...
          Re: Which Linux to choose - to have access to the largest number of video tutorials.
"I don't know what you mean?"

I mean the same blurb with a link to that odd kernel, copy-pasted verbatim a gazillion times - textbook...
          Re: Which Linux to choose - to have access to the largest number of video tutorials.
Slightly off-topic, but about the kernel, since it was brought up here.

Distributions sometimes build their own kernels for their own needs. For example, where AppArmor is used (Ubuntu) it has to be built in, and likewise where SELinux (or something similar - Fedora) is used. Some...
          Offer - NGO Reviews - Newcastle
Look for Filipino recipes that can accommodate colorful substances corresponding to peas, bell peppers and corn kernels. You'll be able to even add a little bit of food colour if the recipe allows. Well-known Filipino Meals healthy amazing recipes to Sample at Lutong Bahay Website. In the event that they know they're helping put the meals collectively, they'll be more open to your ideas.
          "Thermal Pressure" Kernel Feature Would Help Linux Performance When Running Hot      Cache   Translate Page      
Linaro engineer Thara Gopinath sent out an experimental set of kernel patches today that introduces the concept of "thermal pressure" to the Linux kernel for helping assist Linux performance when the processor cores are running hot...
          ROCm 1.9.1 Released With Vega 7nm DPM Support, Profiling Fix      Cache   Translate Page      
As a follow-up to the ROCm 1.9 release from a month ago that brought initial Vega 20 support, upstream kernel compatibility with the AMDKFD code, and other improvements, ROCm 1.9.1 was quietly released a few days ago...
          When is a MySQL error not a MySQL error

Photo by Cassidy Mills on Unsplash

I came across this error recently: Mysql2::Error: Can't connect to MySQL server on 'some-db-server.example.com' (113)

A quick search on the Internet, resulted in various Q & A sites hinting at a connectivity/routing issue to/from the MySQL server.

Whilst this was probably enough information for me to fix, if the problem exists on a 3rd party's infrastructure you want to provide a bit more information.

The first port of call was to see if the error code 113 appears in the MySQL reference. You can imagine my surprise when I couldn't find 113 anywhere in this chapter.

Luckily there is help available from MySQL in the form of a utility called perror that allows you to look up MySQL error codes.

By typing perror along with error code, you'll get the following:

$ perror 113
OS error code 113:  No route to host

So the reason we can't find this error in either the Client or Server sections of the MySQL reference manual is that it's an operating system error.

The operating system in question is Linux, so we know we're looking for C error number codes (errno.h). If you've got access to the kernel source you can find it in /usr/src/linux-source-<VERSION>/include/uapi/asm-generic/errno.h; if you don't have the source installed you can see the definition of 113 on GitHub:

#define    EHOSTUNREACH    113    /* No route to host */
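
For OS-level codes like this one, the text that perror prints comes straight from the C library, so the lookup can be reproduced in a few lines of C; the snippet below is just an illustration of where the "No route to host" string for 113 comes from.

#include <stdio.h>
#include <string.h>
#include <errno.h>

int main(void)
{
	/* On Linux, EHOSTUNREACH is 113; strerror() returns the same
	 * "No route to host" text that the MySQL client appends to its error.
	 */
	printf("%d: %s\n", EHOSTUNREACH, strerror(113));
	return 0;
}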

So armed with this information, I could contact the 3rd party and ask them to check routing and firewall rules between us and the database server.


          Debian GNU/Linux 9 "Stretch" Gets New Kernel Patch to Fix Two Security Flaws

Coming just a week after the latest major kernel security update for Debian GNU/Linux 9 "Stretch," the new Linux kernel security patch is here to address a flaw (CVE-2018-15471) discovered by Google Project Zero's Felix Wilhelm in the hash handling of Linux kernel's xen-netback module, which could result in information leaks, privilege escalation, as well as denial of service.

"Felix Wilhelm of Google Project Zero discovered a flaw in the hash handling of the xen-netback Linux kernel module. A malicious or buggy frontend may cause the (usually privileged) backend to make out of bounds memory accesses, potentially resulting in privilege escalation, denial of service, or information leaks," reads the security advisory published by Salvatore Bonaccorso.

Read more



           Linus' Behavior and the Kernel Development Community

On September 16, 2018, Linus Torvalds released the 4.19-rc4 version of the kernel, and he also announced he was taking a break from Linux development in order to consider his own behavior and to come up with a better approach to kernel development. This was partly inspired by his realization that he wasn't looking forward to the Kernel Summit event, and he said that "it wasn't actually funny or a good sign that I was hoping to just skip the yearly kernel summit entirely."

Read more



          Global Apricot Kernels Market Research Report 2018
(EMAILWIRE.COM, October 10, 2018 ) This market intelligence report is a comprehensive analysis of the situation of Apricot Kernels Market A detailed investigation of the past progress, present market scenario, and future prospects has been offered in the report. It also gives accurate data of the...
          What's the difference between the macOS and Linux kernels?

Some people might think the macOS and Linux kernels are similar, because they can handle similar commands and run similar software. Some even believe Apple's macOS is based on Linux. In fact, the two kernels have very different histories and features. Today we'll look at the differences between the macOS and Linux kernels.

The history of the macOS kernel

We'll start with the history of the macOS kernel. In 1985, Steve Jobs left Apple after a falling-out with CEO John Sculley and Apple's board of directors. He then founded a new computer company called NeXT. Jobs wanted to bring a new computer, with a new operating system, to market quickly. To save time, the NeXT team used the Mach kernel from Carnegie Mellon and parts of the BSD code base to create the NeXTSTEP operating system.

NeXT never became a financial success, thanks in part to Jobs' spending habits, just as at Apple. Meanwhile, Apple tried several times to update its own operating system, even partnering with IBM. In 1997, Apple acquired NeXT for $429 million. As part of the deal, Jobs returned to Apple, and NeXTSTEP became the foundation of macOS and iOS.

The history of the Linux kernel

Unlike the macOS kernel, Linux was not created as part of a commercial effort. Instead, it was created in 1991 by Finnish computer science student Linus Torvalds. Originally, the kernel was written to the specifications of Linus' own computer, because he wanted to make use of its new 80386 processor. Linus posted the code of his new kernel to Usenet in August 1991. Soon he was receiving code and feature suggestions from all over the world. The following year, Orest Zborowski ported the X Window System to Linux, enabling it to support a graphical user interface.

Over the past 27 years, Linux has slowly grown and gained features. It is no longer a student's small project. Now it runs on most of the world's computing devices and supercomputers. Not too shabby.

Features of the macOS kernel

The macOS kernel is officially known as XNU. The acronym stands for "XNU is Not Unix." According to Apple's GitHub page, XNU is "a hybrid kernel combining the Mach kernel developed at Carnegie Mellon University with components from FreeBSD and a C++ API for writing drivers." The BSD subsystem parts of the code are "typically implemented as user-space servers in microkernel systems." The Mach part handles the low-level work, such as multitasking, protected memory, virtual memory management, kernel debugging support, and console I/O.

Features of the Linux kernel

While the macOS kernel combines the features of a microkernel (Mach) and a monolithic kernel (BSD), Linux is solely a monolithic kernel. A monolithic kernel is responsible for CPU management, memory, inter-process communication, device drivers, file systems, and system server calls.

The difference between the Mac and Linux kernels in one line

The macOS kernel (XNU) has been around longer than Linux and is based on a combination of two even older code bases. On the other hand, Linux is newer, was written from scratch, and is used on many more devices.

If you found this article interesting, please take a moment to share it on social media, Hacker News or Reddit.

Thoughts

I recently tried out Apple's iOS; iOS and Android correspond to the Unix and Linux kernels respectively.

You can feel how Apple has created a better user experience through its smooth system and a high-standard software ecosystem.

I happened to read this article and am sharing it here for general awareness.

References: macOS High Sierra - Apple; The Linux Kernel Archives; What is the Difference Between the macOS and Linux Kernels | It's FOSS
          How newlines affect Linux kernel performance

The linux kernel strives to be fast and efficient. As it is written mostly in C, it can mostly control how the generated machine code looks. Nevertheless, as the kernel code is compiled into machine code, the compiler optimizes the generated code to improve its performance. The kernel code, however, employs uncommon coding techniques, which can fail code optimizations. In this blog-post, I would share my experience in analyzing the reasons for poor code inlining of the kernel code. Although the performance improvement are not significant in most cases, understanding these issues are valuable in preventing them from becoming larger. New-lines, as promised, will be one of the reasons, though not the only one.

New lines in inline assembly

One fine day, I encountered a strange phenomenon: minor changes I performed in the Linux source code caused small but noticeable performance degradation. As I expected these changes to actually improve performance, I decided to disassemble the functions which I changed. To my surprise, I realized that my change caused functions that were previously inlined not to be inlined anymore. The decision not to inline these functions seemed dubious as they were short.

I decided to further investigate this issue and to check whether it affects other parts of the kernel. Arguably, it is rather hard to say whether a function should be inlined, so some sort of indication of bad inlining decisions is needed. C functions that are declared with the inline keyword are not bound to be inlined by the compiler, so having a non-inlined function that is marked with the inline keyword is not by itself an indication of a bad inlining decision.

Arguably, there are two simple heuristics to find functions which were suspiciously not inlined for the wrong reason. One heuristic is to look for short (binary-wise) functions by looking at the static symbols. A second heuristic is to look for functions which appear in multiple translation units (objects), as this might indicate they were declared as inline but were eventually not inlined, and that they are in common use. In both cases, there may be valid reasons for the compiler not to inline functions even if they are short, for example if they are used as a value for a function pointer. However, they can give an indication if something is "very wrong" in how inlining is performed, or more correctly, ignored.

In practice, I used both heuristics, but in this post I will only use the second one to check whether inlining decisions seem dubious. To do so I rebuild the kernel, using the localyesconfig make target to incorporate the modules into the core. I ensure the "kernel hacking" features in the config are off, as those tend to blow up the size of the code and rightfully cause functions not to be inlined. I then looked for static functions which had the most instances in the built kernel:

$ nm --print-size ./vmlinux | grep ' t ' | cut -d' ' -f2- | sort | uniq -c | grep -v '^ 1' | sort -n -r | head -n 5

Instances Size             Function Name
36 0000000000000019 t copy_overflow
 8 000000000000012f t jhash
 8 000000000000000d t arch_local_save_flags
 7 0000000000000017 t dst_output
 6 000000000000004e t put_page

As seen, the results are suspicious. As mentioned before, in some cases there are good reasons for functions not to be inlined. jhash() is a big function (303 bytes) so it is reasonable for it not to be inlined. dst_output()'s address is used as a function pointer, which causes it not to be inlined. Yet the other functions seem to be great candidates for inlining, and it is not clear why they are not inlined. Let's look at the source code of copy_overflow(), which has many instances in the binary:

static inline void copy_overflow(int size, unsigned long count)
{
	WARN(1, "Buffer overflow detected (%d < %lu)!\n", size, count);
}

Will the disassembly tell us anything?

0xffffffff819315e0 <+0>:  push %rbp
0xffffffff819315e1 <+1>:  mov  %rsi,%rdx
0xffffffff819315e4 <+4>:  mov  %edi,%esi
0xffffffff819315e6 <+6>:  mov  $0xffffffff820bc4b8,%rdi
0xffffffff819315ed <+13>: mov  %rsp,%rbp
0xffffffff819315f0 <+16>: callq 0xffffffff81089b70 <__warn_printk>
0xffffffff819315f5 <+21>: ud2
0xffffffff819315f7 <+23>: pop  %rbp
0xffffffff819315f8 <+24>: retq

Apparently not. Notice that out of the 9 assembly instructions that are shown above, 6 deal with the function entry and exit - for example, updating the frame pointer, and only the 3 bolded ones are really needed.

To understand the problem, we must dig deeper and look at the warning mechanism in Linux. In x86, this mechanism shares the same infrastructure with the bug reporting mechanism. When a bug or a warning is triggered, the kernel prints the filename and the line number in the source code that triggered the bug, which can then be used to analyze the root cause of the bug. A naive implementation, however, would cause the code cache to be polluted with this information as well as the function call to the function that prints the error message, consequently causing performance degradation.

Linux therefore uses a different scheme by setting an exception triggering instruction ( ud2 on x86) and saving the warning information in a bug table that is set in a different section in the executable. Once a warning is triggered using the WARN() macro, an exception is triggered and the exception handler looks for the warning information - the source-code filename and line - in the table.

Inline assembly is used to save this information in _BUG_FLAGS() . Here is its code after some simplifications to ease readability:

asm volatile("1: ud2\n" "<strong>.pushsection</strong> __bug_table,\"aw\"\n" "2: .long 1b - 2b\n" /* bug_entry::bug_addr */ " .long %c0 - 2b\n" /* bug_entry::file */ " .word %c1\n" /* bug_entry::line */ " .word %c2\n" /* bug_entry::flags */ " .org 2b+%c3\n" "<strong>.popsection</strong>" : : "i" (__FILE__), "i" (__LINE__), "i" (flags),

"i" (sizeof(struct bug_entry)));

Ignoring the assembly shenanigans that this code uses, we can see that in practice it generates a single ud2 instruction. However, the compiler considers this code to be "big" and consequently oftentimes does not inline functions that use WARN() or similar functions.

The reason turns out to be the newline characters (marked as '\n' above). The kernel compiler, GCC, is unaware of the code size that will be generated by the inline assembly. It therefore tries to estimate its size based on newline characters and statement separators (';' on x86). In GCC, we can see the code that performs this estimation in the estimate_num_insns() function:

int estimate_num_insns (gimple *stmt, eni_weights *weights)
{
  ...
  case GIMPLE_ASM:
    {
      int count = asm_str_count (gimple_asm_string (as_a <gasm *> (stmt)));
      /* 1000 means infinity. This avoids overflows later
         with very long asm statements.  */
      if (count > 1000)
        count = 1000;
      return count;
    }
  ..
}

Note that this pattern, of saving data using inline assembly, is not limited to bugs and warnings. The kernel uses it for many additional purposes: exception tables, that gracefully handle an exception that is triggered inside the kernel; alternative instructions table, that tailors the kernel on boot-time to the specific CPU architecture extensions that are supported; annotations that are used for stack metadata validation by objtool and so on.

Before we get to solving this problem, a question needs to be raised: is the current behavior flawed at all? Eventually, the size of the kernel will increase if functions that use WARN(), for example, are inlined. This increase in size can cause the kernel image to be bigger, and since the Linux kernel cannot be paged out, it will also increase memory consumption. However, the main reason that the compiler strives to avoid inflation of the code size is to avoid pressure on the instruction cache, whose impact may offset inlining benefits. Moreover, the heuristics of other compiler optimizations (e.g., loop optimizations) depend on the size of the code.

Solving the problem is not trivial. Ideally, GCC would have used an integrated assembler, similarly to LLVM , which would give better estimation of the generated code size of inline assembly. Experimentally, LLVM seems to make the right inlining decisions and is not affected by new-lines or data that is set in other sections of the executable. Interestingly, it appears to do so even when the integrated assembler is not used for assembly. GCC, however, uses the GNU assembler after the code is compiled, which prevents it from getting a correct estimation of the code size.

Alternatively, the problem could have been solved by overriding GCC's code size estimation through a directive or a built-in function. However, looking at GCC code does not reveal a direct or indirect way to achieve this goal.

One may think that using the always_inline function attribute to force the compiler to inline functions would solve the problem. It appears that some have encountered the problem of poor inlining decisions in the past, without understanding the root cause, and used this solution. However, this solution has several drawbacks. First, it is hard to add and maintain these annotations. Second, it does not address other code optimizations that rely on code-size estimation. Third, the kernel uses various configurations and supports multiple CPU architectures, which may require a certain function to be inlined in some setups and not inlined in others. Finally, and most importantly, using always_inline can just push the problem upwards to calling functions, as we will see later.

Therefore, a more systematic solution is needed. The solution comes in the form of assembly macros that hold the long assembly code, leaving a single line inside the inline assembly that invokes the macro. This not only improves the generated machine code, but also makes the assembly code more readable, as it avoids various quirks that are required in inline assembly, for example newline characters. Moreover, in certain cases this change makes it possible to consolidate the currently separate implementations that are used in C and assembly, which eases code maintenance.
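
Here is a minimal sketch of that idea, with illustrative names; the kernel's actual macros, and the way they are made visible to the assembler, differ in their details. The long sequence moves into an assembler macro, and the inline assembly shrinks to one line:

/* Defined once where the assembler can see it for every compiled file
 * (illustrative name, not the kernel's actual macro): */
.macro EMIT_BUG_ENTRY file line flags size
1:	ud2
	.pushsection __bug_table,"aw"
2:	.long 1b - 2b		/* bug_entry::bug_addr */
	.long \file - 2b	/* bug_entry::file */
	.word \line		/* bug_entry::line */
	.word \flags		/* bug_entry::flags */
	.org 2b + \size
	.popsection
.endm

/* The C side now contains a single assembly line, so GCC's size
 * estimate is no longer inflated: */
asm volatile("EMIT_BUG_ENTRY %c0, %c1, %c2, %c3"
	     : : "i" (__FILE__), "i" (__LINE__), "i" (flags),
	         "i" (sizeof(struct bug_entry)));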

Addressing the issue yields a performance improvement of tens of cycles for certain system calls, which is admittedly not dramatic. After addressing these issues, we see copy_overflow() and other functions disappear from the list of commonly non-inlined inline functions.

Instances  Size               Function Name
9          000000000000012f   t jhash
8          0000000000000011   t kzalloc
7          0000000000000017   t dst_output
5          000000000000002f   t acpi_os_allocate_zeroed
5          0000000000000029   t acpi_os_allocate

However, some new ones appeared. Let's try to understand where they come from.

Constant computations and inlining

As shown, kzalloc() is not always inlined, although its code is very simple.

static inline void *kzalloc(size_t size, gfp_t flags)
{
	return kmalloc(size, flags | __GFP_ZERO);
}

The assembly, again, does not provide any answers as to why it is not inlined:

0xffffffff817929e0 <+0>:  push   %rbp
0xffffffff817929e1 <+1>:  mov    $0x14080c0,%esi
0xffffffff817929e6 <+6>:  mov    %rsp,%rbp
0xffffffff817929e9 <+9>:  callq  0xffffffff8125d590 <__kmalloc>
0xffffffff817929ee <+14>: pop    %rbp
0xffffffff817929ef <+15>: retq

The answer to our question lies in kmalloc(), which is called by kzalloc() and is considered by GCC's heuristics to have many instructions. kmalloc() is inlined since it is marked with the always_inline attribute, but its estimated instruction count is then attributed to the calling function, kzalloc() in this case. This result exemplifies why the use of the always_inline attribute is not a sufficient solution to the code-inlining problem.
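
A toy illustration of this effect (hypothetical code, not taken from the kernel): the forcibly inlined helper carries a large size estimate, which is then charged to its small caller.

/* big_helper() must be inlined, so its overestimated size (several "asm
 * instructions" according to the newline heuristic) is attributed to
 * small_wrapper(), which may in turn be judged too big to inline. */
static __attribute__((always_inline)) inline int big_helper(int x)
{
	asm volatile("# step 1\n"
		     "# step 2\n"
		     "# step 3\n"
		     "# step 4");
	return x * 2;
}

static inline int small_wrapper(int x)
{
	return big_helper(x) + 1;
}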

Still, it is not clear why GCC estimates that kmalloc() would be compiled into many instructions. As shown, it is compiled into a single call to __kmalloc(). To answer this question, we need to follow the kmalloc() code, which eventually uses the ilog2() macro to compute the base-2 logarithm of an integer in order to compute the page-allocation order.

Here is a shortened version of ilog2():

#define ilog2(n)				\
(						\
	__builtin_constant_p(n) ? (		\
	/* Optimized version for constants */	\
		(n) < 2 ? 0 :			\
		(n) & (1ULL << 63) ? 63 :	\
		(n) & (1ULL << 62) ? 62 :	\
		...				\
		(n) & (1ULL << 3) ? 3 :		\
		(n) & (1ULL << 2) ? 2 :		\
		1 ) :				\
	/* Another version for non-constants */	\
	(sizeof(n) <= 4) ?			\
		__ilog2_u32(n) :		\
		__ilog2_u64(n)			\
)

As shown, the macro first uses the built-in function __builtin_constant_p() to determine whether n is known to be a constant at compilation time. If n is known to be constant, a long series of conditions is evaluated to compute the result at compilation time, which allows further optimizations. Otherwise, if n is not known to be constant, a short code sequence is emitted to compute the result at runtime. Yet, regardless of whether n is constant or not, all of the conditions in the ilog2() macro are evaluated at compilation time and do not translate into any machine-code instructions.
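
For example (hypothetical use sites, assuming the ilog2() definition shown above): the constant case folds to a literal at compile time, while the non-constant case emits only the fallback call.

int set_orders(unsigned long n)
{
	int a = ilog2(4096);	/* folds to the constant 12 at compile time */
	int b = ilog2(n);	/* n is unknown here, so this becomes a call to
				 * __ilog2_u64() (__ilog2_u32() for 32-bit types) */

	return a + b;
}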

However, although the generated code is efficient, it causes GCC, again, to incorrectly estimate the number of instructions that ilog2() takes. Apparently, the number of instructions is estimated before inlining decisions take place, and at that stage the compiler usually still does not know whether n is constant. Later, after inlining decisions are performed, GCC cannot update the instruction-count estimation accordingly.

This inlining problem is not as common as the previous one, yet it is not rare. Bit operations (e.g., test_bit()) and bitmaps commonly use __builtin_constant_p() in the described manner. As a result, functions that use these facilities, for example cpumask_weight(), are not inlined.

A possible solution for this problem is to use the built-in __builtin_choose_expr() to test __builtin_constant_p(), instead of using C if-conditions and conditional operators (?:):

#define ilog2(n)					\
(							\
	__builtin_choose_expr(__builtin_constant_p(n),	\
		((n) < 2 ? 0 :				\
		 (n) & (1ULL << 63) ? 63 :		\
		 (n) & (1ULL << 62) ? 62 :		\
		 ...					\
		 (n) & (1ULL << 3) ? 3 :		\
		 (n) & (1ULL << 2) ? 2 :		\
		 1),					\
		(sizeof(n) <= 4) ?			\
			__ilog2_u32(n) :		\
			__ilog2_u64(n))			\
)

This built-in is evaluated earlier in the compilation process, before inlining decisions are made. Yet, there is a catch: as this built-in is evaluated earlier, GCC is only able to determine that an argument is constant for constant expressions, which can cause less efficient code to be generated. For instance, if a constant is given as a function argument, GCC will not be able to determine it is constant. In the following case, for example, the non-constant version will be used:

int bar(int n) { return ilog2(n); }
int foo(int n) { return bar(n); }

v = foo(bar(5)); /* will use the non-constant version */

It is therefore questionable whether using __builtin_choose_expr() is an appropriate solution. Perhaps it is better to just mark functions such as kzalloc() with the always_inline attribute. Compiling using LLVM reveals, again, that LLVM inlining decisions are not negatively affected by the use of __builtin_constant_p().

Function attributes

Finally, there are certain function attributes that affect inlining decisions. Using function attributes to set optimization levels for specific functions can prevent the compiler from inlining those functions or the functions that they call. The Linux kernel rarely uses such attributes, but one of the uses is in the KVM function vmx_vcpu_run(), a very hot function that launches or resumes the virtual machine. The use of the optimization attribute in this function is actually just to prevent cloning of the function. Its side effect is, however, that all the functions it uses are not inlined, including, for example, the function to_vmx():

0x0000000000000150 <+0>: push   %rbp
0x0000000000000151 <+1>: mov    %rdi,%rax
0x0000000000000154 <+4>: mov    %rsp,%rbp
0x0000000000000157 <+7>: pop    %rbp
0x0000000000000158 <+8>: retq

This function just returns as output the same argument it received as input. Not inlining functions that are called by vmx_vcpu_run() induces significant overhead, which can be as high as 10% for a VM-exit.
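
A hypothetical, self-contained illustration of this side effect (the specific attribute and the names are for demonstration only; the exact behavior depends on the GCC version and flags):

/* tiny_helper() is trivially inlinable, but because run_loop() carries a
 * per-function optimization attribute (here, one that disables function
 * cloning), GCC may decline to inline calls inside it. */
static inline int tiny_helper(int x)
{
	return x + 1;
}

static int __attribute__((optimize("no-ipa-cp-clone"))) run_loop(int x)
{
	return tiny_helper(x);	/* may remain an out-of-line call */
}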

Finally, the cold function attribute causes inlining to be done less aggressively. This attribute informs the compiler that a function is unlikely to be executed, and the compiler, among other things, optimizes such functions for size rather than speed, which can result in very conservative inlining decisions. All the __init and __exit functions, which are used during kernel and module (de)initialization, are marked as cold. It is questionable whether this is the desired behavior.
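
For reference, the attribute itself is used like this (illustrative example, not taken from the kernel):

/* A function marked cold is optimized for size rather than speed, and
 * calls to it are treated as unlikely, so inlining decisions around it
 * are much more conservative. */
static void __attribute__((cold)) emergency_cleanup(void)
{
	/* rarely executed error path ... */
}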

Conclusions

Despite the fact that C appears to give us great control over the generated code, that is not always the case. Compiler extensions may be needed to give programmers greater control. Tools that analyze whether the generated binary is efficient, given the source code, may be needed as well. In the meantime, there is no alternative to manual inspection of the generated binary code.

Thanks to Linus Torvalds, Hans Peter Anvin, Masahiro Yamada, Josh Poimboeuf, Peter Zijlstra, Kees Cook, Ingo Molnar and others for their assistance in the analysis and in solving this problem.


          Software Developer - Wafer Space Semiconductor Technologies Private Limited - Bengaluru, Karnataka      Cache   Translate Page      
Must have experience with Architecture level with multiple SW technology development and e2e product integration and product scope(kernel, MW, android framework...
From Monster IN - Tue, 09 Oct 2018 14:34:36 GMT - View all Bengaluru, Karnataka jobs
          Presco Plc Current Vacancies [3 Positions]      Cache   Translate Page      
Presco is a fully-integrated agro-industrial establishment with oil palm plantations, palm oil mill, palm kernel crushing plant and vegetable oil refining and fractionation plant. It also has an olein and.....
          Windows 10 Mobile recibe la Build 15254.538 como acumulativa      Cache   Translate Page      

Although Windows 10 Mobile will no longer receive new features, it does continue to receive cumulative updates to fix bugs and deliver security-related fixes.

If you are one of the Spartans still holding out with Windows 10 Mobile, today you will be receiving an update shown as “2018-10 Actualización de Windows 10 Version 1709 para dispositivos basados en arm Phone”. In practice this means you are receiving Build 15254.538, which arrives as a security patch under the code KB4464853.

This cumulative update patches various parts of the system related to their security, which are listed as follows.

Security updates for Internet Explorer, Windows Media Player, Microsoft Graphics Component, Microsoft Edge, Windows Kernel, Windows Storage and Filesystems, and Microsoft Scripting Engine

As always, go to Settings, Update & Security, Windows Update, and there click on check for updates.

The post "Windows 10 Mobile recibe la Build 15254.538 como acumulativa" was first published on OneWindows - Windows 10, Mobile y WP, noticias y aplicaciones.


          Nuevas acumulativas de Octubre para varias versiones de Windows 10      Cache   Translate Page      

A new cumulative update, Build 17763.55, has been released for users of the Windows 10 October 2018 Update, corresponding to version 1809, and it arrives with patch code KB4464330.

As a cumulative update it brings few new items; only two changes are referenced, which we detail below:

Fixes a Group Policy expiration issue, caused by an incorrect timing calculation, that could prematurely delete profiles on devices subject to the setting “Delete user profiles older than a specified number of days.”
Security updates for Windows Kernel, Microsoft Graphics Component, Microsoft Scripting Engine, Internet Explorer, Windows Storage and Filesystems, Windows Linux, Windows Wireless Networking, Windows MSXML, the Microsoft JET Database Engine, Windows Peripherals, Microsoft Edge, Windows Media Player, and Internet Explorer.

Also for earlier versions

If you are on version 1803 of Windows 10, corresponding to the April 2018 Update, the build you will be receiving is Build 17134.345 as patch KB4462919, which comes only with security updates for the Kernel, graphics components, Edge, etc.

If you are still on version 1709, that is, the Fall Creators Update, what you will have received is patch KB4462918, corresponding to Build 16299.726, which comes with security updates. Meanwhile, users on the Creators Update, version 1703, receive patch KB4462937, in other words Build 15063.1387, with more security updates.

As usual, this update arrives via Windows Update.

The post "Nuevas acumulativas de Octubre para varias versiones de Windows 10" was first published on OneWindows - Windows 10, Mobile y WP, noticias y aplicaciones.


          微软发布10月补丁修复51个安全问题      Cache   Translate Page      
Microsoft released its October security updates on Tuesday, fixing 51 issues ranging from simple spoofing attacks to remote code execution. The affected products include .NET Core, Azure, Device Guard, Internet Explorer, Microsoft Edge, Microsoft Exchange Server, Microsoft Graphics Component, Microsoft JET Database Engine, Microsoft Office, Microsoft Office SharePoint, Microsoft Scripting Engine, Microsoft Windows, Microsoft Windows DNS, Microsoft XML Core Services, SQL Server, Windows - Linux, Windows Hyper-V, Windows Kernel, Windows Media Player, and Windows Shell.
          Talent International: Contract Linux Kernel Engineer (Embedded)      Cache   Translate Page      
£350 - £425 per day: Talent International: Contract Linux Kernel Engineer (Embedded) One of Talent International's clients in the SW requires an embedded c++ engineer on a contract basis. I am looking for solid Linux candidates, with PCI, DMA device driver experience ideally. KEY SKILLS/EXPERIENCE Bath
          How max_prepared_stmt_count bring down the production MySQL system      Cache   Translate Page      
MySQL Adventures: How max_prepared_stmt_count can bring down production We recently moved an On-Prem environment to GCP for better scalability and availability. The customer’s main database is MySQL. Due to the nature of customer’s business, it’s a highly transactional workload (one of the hot startups in APAC). To deal with the scale and meet availability requirements, we have deployed MySQL behind ProxySQL — which takes care of routing some of the resource intensive SELECTs to chosen replicas. The setup consists of: One Master Two slaves One Archive database server Post migration to GCP, everything was nice and calm for a couple of weeks, until MySQL decided to start misbehaving and leading to an outage. We were able to quickly resolve and bring the system back online and what follows are lessons from this experience. The configuration of the Database: CentOS 7. MySQL 5.6 32 Core CPU 120GB Memory 1 TB SSD for MySQL data volume. The total database size is 40GB. (yeah, it is small in size, but highly transactional) my.cnf is configured using Percona’s configuration wizard. All tables are InnoDB Engine No SWAP partitions. The Problem It all started with an alert that said MySQL process was killed by Linux’s OOM Killer. Apparently MySQL was rapidly consuming all the memory (about 120G) and OOM killer perceived it as a threat to the stability of the system and killed the process. We were perplexed and started investigating. Sep 11 06:56:39 mysql-master-node kernel: Out of memory: Kill process 4234 (mysqld) score 980 or sacrifice child Sep 11 06:56:39 mysql-master-node kernel: Killed process 4234 (mysqld) total-vm:199923400kB, anon-rss:120910528kB, file-rss:0kB, shmem-rss:0kB Sep 11 06:57:00 mysql-master-node mysqld: /usr/bin/mysqld_safe: line 183: 4234 Killed nohup /usr/sbin/mysqld --basedir=/usr --datadir=/mysqldata --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/log/mysqld.log --open-files-limit=65535 --pid-file=/var/run/mysqld/mysqld.pid --socket=/mysqldata/mysql.sock < /dev/null > /dev/null 2>&1 Naturally, we started looking at mysql configuration to see if something is off. InnoDB Parameters: innodb-flush-method = O_DIRECTinnodb-log-files-in-group = 2innodb-log-file-size = 512Minnodb-flush-log-at-trx-commit = 1innodb-file-per-table = 1innodb-buffer-pool-size = 100G Other Caching Parameters: tmp-table-size = 32Mmax-heap-table-size = 32Mquery-cache-type = 0query-cache-size = 0thread-cache-size = 50open-files-limit = 65535table-definition-cache = 4096table-open-cache = 50 We are not really using query cache and one of the heavy front end service is PHP Laravel. Here is the memory utilization graph. The three highlighted areas are the points at which we had issues in production. The second issue happened very shortly, so we reduced the innodb-buffer-pool-size to 90GB. But even though the memory utilization never came down. So we scheduled a cronjob to flush OS Cache at least to give some addition memory to the Operating system by using the following command. This was a temporary measure till we found the actual problem. sync; echo 3 > /proc/sys/vm/drop_cache But This didn’t help really. The memory was still growing and we had to look at what’s really inside the OS Cache? Fincore: There is a tool called fincore helped me find out what’s actually the OS cache held. Its actually using Perl modules. use the below commands to install this. 
yum install perl-Inline rpm -ivh http://fr2.rpmfind.net/linux/dag/redhat/el6/en/x86_64/dag/RPMS/fincore-1.9-1.el6.rf.x86_64.rpm It never directly shows what files are inside the buffer/cache. We instead have to manually give the path and it’ll check what files are in the cache for that location. I wanted to check about Cached files for the mysql data directory. cd /mysql-data-directory fincore -summary * > /tmp/cache_results Here is the sample output of the cached files results. page size: 4096 bytesauto.cnf: 1 incore page: 0dbadmin: no incore pages.Eztaxi: no incore pages.ibdata1: no incore pages.ib_logfile0: 131072 incore pages: 0 1 2 3 4 5 6 7 8 9 10......ib_logfile1: 131072 incore pages: 0 1 2 3 4 5 6 7 8 9 10......mysql: no incore pages.mysql-bin.000599: 8 incore pages: 0 1 2 3 4 5 6 7mysql-bin.000600: no incore pages.mysql-bin.000601: no incore pages.mysql-bin.000602: no incore pages.mysql-bin.000858: 232336 incore pages: 0 1 2 3 4 5 6 7 8 9 10......mysqld-relay-bin.000001: no incore pages.mysqld-relay-bin.index: no incore pages.mysql-error.log: 4 incore pages: 0 1 2 3mysql-general.log: no incore pages.mysql.pid: no incore pages.mysql-slow.log: no incore pages.mysql.sock: no incore pages.ON: no incore pages.performance_schema: no incore pages.mysql-production.pid: 1 incore page: 0 6621994 pages, 25.3 Gbytes in core for 305 files; 21711.46 pages, 4.8 Mbytes per file. The highlighted points show the graph when OS Cache is cleared.How we investigated this issue: The first document that everyone refers is How mysql uses the memory from MySQL’s documentation. So we started with where are all the places that mysql needs memory. I’ll explain this about in a different blog. Lets continue with the steps which we did. Make sure MySQL is the culprit: Run the below command and this will give you the exact memory consumption about MySQL. ps --no-headers -o "rss,cmd" -C mysqld | awk '{ sum+=$1 } END { printf ("%d%s\n", sum/NR/1024,"M") }' 119808M Additional Tips: If you want to know each mysql’s threads memory utilization, run the below command. # Get the PID of MySQL:ps aux | grep mysqld mysql 4378 41.1 76.7 56670968 47314448 ? 
Sl Sep12 6955:40 /usr/sbin/mysqld --basedir=/usr --datadir=/mysqldata --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/log/mysqld.log --open-files-limit=65535 --pid-file=/var/run/mysqld/mysqld.pid --socket=/mysqldata/mysql.sock # Get all threads memory usage:pmap -x 4378 4378: /usr/sbin/mysqld --basedir=/usr --datadir=/mysqldata --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/log/mysqld.log --open-files-limit=65535 --pid-file=/var/run/mysqld/mysqld.pid --socket=/mysqldata/mysql.sockAddress Kbytes RSS Dirty Mode Mapping0000000000400000 11828 4712 0 r-x-- mysqld000000000118d000 1720 760 476 rw--- mysqld000000000133b000 336 312 312 rw--- [ anon ]0000000002b62000 1282388 1282384 1282384 rw--- [ anon ]00007fd4b4000000 47816 37348 37348 rw--- [ anon ]00007fd4b6eb2000 17720 0 0 ----- [ anon ]00007fd4bc000000 48612 35364 35364 rw--- [ anon ]...........................00007fe1f0075000 2044 0 0 ----- libpthread-2.17.so00007fe1f0274000 4 4 4 r---- libpthread-2.17.so00007fe1f0275000 4 4 4 rw--- libpthread-2.17.so00007fe1f0276000 16 4 4 rw--- [ anon ]00007fe1f027a000 136 120 0 r-x-- ld-2.17.so00007fe1f029c000 2012 2008 2008 rw--- [ anon ]00007fe1f0493000 12 4 0 rw-s- [aio] (deleted)00007fe1f0496000 12 4 0 rw-s- [aio] (deleted)00007fe1f0499000 4 0 0 rw-s- [aio] (deleted)00007fe1f049a000 4 4 4 rw--- [ anon ]00007fe1f049b000 4 4 4 r---- ld-2.17.so00007fe1f049c000 4 4 4 rw--- ld-2.17.so00007fe1f049d000 4 4 4 rw--- [ anon ]00007ffecc0f1000 132 72 72 rw--- [ stack ]00007ffecc163000 8 4 0 r-x-- [ anon ]ffffffffff600000 4 0 0 r-x-- [ anon ]---------------- ------- ------- ------- total kB 122683392 47326976 47320388 InnoDB Buffer Pool: Initially we suspected the InnoDB. We have checked the innoDB usage from the monitoring system. But the result was negative. It never utilized more than 40GB. That thickens the plot. If buffer pool only has 40 GB, who is eating all that memory? Is this correct? Does Buffer Pool only hold 40GB? What’s Inside the BufferPool and whats its size? SELECT page_type AS page_type, sum(data_size) / 1024 / 1024 AS size_in_mbFROM information_schema.innodb_buffer_pageGROUP BY page_typeORDER BY size_in_mb DESC; +-------------------+----------------+| Page_Type | Size_in_MB |+-------------------+----------------+| INDEX | 39147.63660717 || IBUF_INDEX | 0.74043560 || UNDO_LOG | 0.00000000 || TRX_SYSTEM | 0.00000000 || ALLOCATED | 0.00000000 || INODE | 0.00000000 || BLOB | 0.00000000 || IBUF_BITMAP | 0.00000000 || EXTENT_DESCRIPTOR | 0.00000000 || SYSTEM | 0.00000000 || UNKNOWN | 0.00000000 || FILE_SPACE_HEADER | 0.00000000 |+-------------------+----------------+ A quick guide about this query. INDEX: B-Tree index IBUF_INDEX: Insert buffer index UNKNOWN: not allocated / unknown state TRX_SYSTEM: transaction system data Bonus: To get the buffer pool usage by index SELECT table_name AS table_name, index_name AS index_name, count(*) AS page_count, sum(data_size) / 1024 / 1024 AS size_in_mbFROM information_schema.innodb_buffer_pageGROUP BY table_name, index_nameORDER BY size_in_mb DESC; Then where mysql was holding the Memory? We checked all of the mysql parts where its utilizing memory. Here is a rough calculation for the memory utilization during the mysql crash. BufferPool: 40GBCache/Buffer: 8GBPerformance_schema: 2GBtmp_table_size: 32MOpen tables cache for 50 tables: 5GBConnections, thread_cache and others: 10GB Almost it reached 65GB, we can round it as 70GB out of 120GB. But still its approximate only. Something is wrong right? 
My DBA mind started to think where is the remaining? Till now, MySQL is the culprit who is consuming all of the memory. Clearing OS cache never helped. Its fine. Buffer Pool is also in healthy state. Other memory consuming parameters are looks good. It’s time to Dive into the MySQL. Lets see what kind of queries are running into the mysql. show global status like 'Com_%';+---------------------------+-----------+| Variable_name | Value |+---------------------------+-----------+| Com_admin_commands | 531242406 || Com_stmt_execute | 324240859 || Com_stmt_prepare | 308163476 || Com_select | 689078298 || Com_set_option | 108828057 || Com_begin | 4457256 || Com_change_db | 600 || Com_commit | 8170518 || Com_delete | 1164939 || Com_flush | 80 || Com_insert | 73050955 || Com_insert_select | 571272 || Com_kill | 660 || Com_rollback | 136654 || Com_show_binlogs | 2604 || Com_show_slave_status | 31245 || Com_show_status | 162247 || Com_show_tables | 1105 || Com_show_variables | 10428 || Com_update | 74384469 |+---------------------------+-----------+ Select, Insert, Update these counters are fine. But a huge amount of prepared statements were running into the mysql. One more Tip: Valgrind Valgrind is a powerful open source tool to profile any process’s memory consumption by threads and child processes. Install Valgrind: # You need C compilers, so install gcc wget ftp://sourceware.org/pub/valgrind/valgrind-3.13.0.tar.bz2tar -xf valgrind-3.13.0.tar.bz2 cd valgrind-3.13.0./configure makemake install Note: Its for troubleshooting purpose, you should stop MySQL and Run with Valgrind. Create an log file to Capture touch /tmp/massif.outchown mysql:mysql /tmp/massif.outchmod 777 /tmp/massif.out Run mysql with Valgrind /usr/local/bin/valgrind --tool=massif --massif-out-file=/tmp/massif.out /usr/sbin/mysqld –default-file=/etc/my.cnf Lets wait for 30mins (or till the mysql takes the whole memory). Then kill the Valgranid and start mysql as normal. Analyze the Log: /usr/local/bin/ms_print /tmp/massif.out We’ll explain mysql memory debugging using valgrind in an another blog. Memory Leak: We have verified all the mysql parameters and OS level things for the memory consumption. But no luck. So I started to think and search about mysql’s memory leak parts. Then I found this awesome blog by Todd. Yes, the only parameter I didn’t check is max_prepared_stmt_count. What is this? From MySQL’s Doc, This variable limits the total number of prepared statements in the server. It can be used in environments where there is the potential for denial-of-service attacks based on running the server out of memory by preparing huge numbers of statements. Whenever we prepared a statement, we should close in the end. Else it’ll not the release the memory which is allocated to it. For executing a single query, it’ll do three executions (Prepare, Run the query and close). There is no visibility that how much memory is consumed by a prepared statement. Is this the real root cause? Run this query to check how many prepared statements are running in mysql server. 
mysql> show global status like 'com_stmt%'; +-------------------------+-----------+| Variable_name | Value |+-------------------------+-----------+| Com_stmt_close | 0 || Com_stmt_execute | 210741581 || Com_stmt_fetch | 0 || Com_stmt_prepare | 199559955 || Com_stmt_reprepare | 1045729 || Com_stmt_reset | 0 || Com_stmt_send_long_data | 0 |+-------------------------+-----------+ You can see there are 1045729 prepared statements are running and the Com_stmt_close variables is showing none of the statements are closed. This query will return the max count for the preparements. mysql> show variables like 'max_prepared_stmt_count';+-------------------------+---------+| Variable_name | Value |+-------------------------+---------+| max_prepared_stmt_count | 1048576 |+-------------------------+---------+ Oh, its the maximum value for this parameter. Then we immediately reduced it to 2000. mysql> set global max_prepared_stmt_count=2000; -- Add this to my.cnfvi /etc/my.cnf [mysqld]max_prepared_stmt_count = 2000 Now, the mysql is running fine and the memory leak is fixed. Till now the memory utilization is normal. In Laravel framework, its almost using this prepared statement. We can see so many laravel + prepare statements questions in StackOverflow. Conclusion: The very important lesson as a DBA I learned is, before setting up any parameter value check the consequences of modifying it and make sure it should not affect the production anymore. Now the mysql side is fine, but the application was throwing the below error. Can't create more than max_prepared_stmt_count statements (current value: 20000) To continue about this series, the next blog post will explain how we fixed the above error using multiplexing and how it helped to dramatically reduce the mysql’s memory utilization. How max_prepared_stmt_count bring down the production MySQL system was originally published in Searce Engineering on Medium, where people are continuing the conversation by highlighting and responding to this story.
          64-bit mmap in 32-bit userspace      Cache   Translate Page      
We've got a 64-bit arm kernel and pretty much all of our userspace is 32-bit. The problem is that I'm getting a 64-bit address when I create a DRM DUMB framebuffer, so when I pass that to mmap (mmap2...
          How Can You Eat Raw Corn? Simple Tips and Tricks      Cache   Translate Page      
Corn, also known as maize, is one of the main staples of any diet—vegan, vegetarian, or omnivore. This ubiquitous cereal also comes with more than a few nutritional benefits. It is rich in potassium, iron, and contains 3,27g of protein per 100g. On top of that, corn is an excellent source of energy because 100g of kernels have about 86 calories. These characteristics make it an excellent choice for a vegan or vegetarian diet. There are many different ways to consume corn. It can be cooked, grilled, or ground into tasty tortilla flour. However, if you’re wondering if you can eat raw corn, here are some things you should know. Can You Eat Raw Corn? The Simple Answer If you grew up in the city, eating raw corn might not have been something you enjoyed as a kid. Yet those who grew up on a farm, especially in the Midwest, know well how tasty corn straight from the cob can be. However, you don’t just go out and munch down on any corn that you can find. There are two varieties of corn and one is perfectly suitable for eating raw while the other isn’t. Sweet corn is the variety […]
          How to create a folder inside proc/pid      Cache   Translate Page      
Hi , I am writing a character device with a Linux Kernel Module and the kernel version is : 4.14.74.. Basically, a user space process with a given tgid can interact with my device through ioctl. ...
          Updating a codec?      Cache   Translate Page      
I need to update a codec for an old kernel; specifically, updating the WM8753 to the WM8750 on kernel-2.6.35.14. The CODEC itself exists in the kernel. Other than adding the new codec into the .config...
          LXer: Canonical Releases Important Ubuntu Kernel Live Patch to Fix L1TF, SpectreRSB      Cache   Translate Page      
Published at LXer: Canonical released a new kernel live patch for all its supported Ubuntu Linux operating systems to address several critical security vulnerabilities discovered by various...
          Comentario en Kali Linux 2018.3 ya está aquí con novedades por Alfredo Alvarado      Cache   Translate Page      
Hello DesdeLinux team, could you put together a tutorial or a post about how to install it in virtual machines such as VBox or VMware? I have seen countless tutorials and I cannot correctly update the repositories, kernel, java, nodejs, etc... Since they are not updated, or not at the specific required version, when I run a program it fails to start its graphical interface. Also, my antivirus detects certain files as viruses and deletes them. I would appreciate it; I am new here and I love your blog, I'm subscribing!
          Internship- Product Development- VM Monitor Group - VMware - Palo Alto, CA      Cache   Translate Page      
VMware is a global leader in cloud infrastructure and business mobility. Excellent knowledge of OS kernel internals, including memory management, resource...
From VMware - Mon, 01 Oct 2018 19:00:23 GMT - View all Palo Alto, CA jobs
          15 CentOS Updates      Cache   Translate Page      
The following updates have been released for CentOS:
CEBA-2018:2893 CentOS 6 gcc-libraries BugFix Update
CEBA-2018:2894 CentOS 6 mailx BugFix Update
CEBA-2018:2895 CentOS 6 libcgroup BugFix Update
CEBA-2018:2896 CentOS 6 nfs-utils BugFix Update
CEBA-2018:2897 CentOS 6 zsh BugFix Update
CEBA-2018:2899 CentOS 6 ypserv BugFix Update
CEBA-2018:2900 CentOS 6 dhcp BugFix Update
CEBA-2018:2901 CentOS 6 device-mapper-multipath BugFix Update
centos-release-7-5.1804.5.el7.centos update
CESA-2018:2846 Important CentOS 6 kernel Security U...
          CentOS: CESA-2018-2846: Important CentOS 6 kernel       Cache   Translate Page      
LinuxSecurity.com: Upstream details at : https://access.redhat.com/errata/RHSA-2018:2846
          Flashed the Wrong image on the diag Partiton      Cache   Translate Page      
Hello, I tried to de-brick my Kindle Paperwhite, which had problems starting and was stuck in a boot loop. I flashed the diags and main partition with an image from a PW2; at that point I did not verify which Paperwhite model I have. Currently it stops at loading the kernel, which is because it is the wrong Kindle model. Code: --------- Quick Memory Test 0x80000000, 0x1fff0000 POST done in 111 ms BOOTMODE OVERRIDE: DIAGS Battery voltage: 3944 mV Hit any key to stop autoboot: 0 uboot > bootm 0xE41000 ## Booting kernel from Legacy Image at 80800000 ... Image Name: Linux-2.6.31-rt11-lab126 Image Type: ARM Linux Kernel Image (uncompressed) Data Size: 4608576 Bytes = 4.4 MB Load Address: 70008000 Entry Point: 70008000 Verifying Checksum ... OK Loading Kernel Image ... --------- Thank you in advance for any tips to get my Kindle working again. Christian
          Sluggish replication; Linux Kernel 4.15.0-36 (1 reply)      Cache   Translate Page      
OS: Ubuntu 16.04.03 x64
MySQL: 5.7.23-community (MySQL ppa repo)

Anyone else running Ubuntu 16.04.3 with kernel 4.15.0-36-generic and having sluggish replication performance?

We have a MySQL farm of seven servers, all running Ubuntu on dedicated hardware. Ubuntu released kernel 4.15.0-36-generic last week and we rebooted this last weekend. Replication has been falling steadily behind since the reboot. There was no reason for it that we could see. The servers were not choking under any load; network, storage, or CPU. Servers on the same network switch were up to hours behind in replication. This was something new. We have never really had a significant lag before - usually a minute or so was the worst we ever had. But hours?

There were no MySQL or OS configurations changes. All servers were updated to MySQL 5.7.23 on 2018-08-08 and have been running fine since that update.

The biggest indicator of the lag was "SLAVE RETRIEVED". It was way behind. "SLAVE EXECUTED" was keeping up, but "SLAVE RETRIEVED" seemed to not be downloading transactions from the upstream host with any urgency.

Long story short, we eliminated the physical network and concluded it had to be an OS update. We restarted one server using kernel 4.15.0-34-generic and the server caught up replication in just a couple of minutes!
          Human Resources Manager at Presco      Cache   Translate Page      
Presco is a fully-integrated agro-industrial establishment with oil palm plantations, palm oil mill, palm kernel crushing plant and vegetable oil refining and fractionation plant. It also has an olein and stearin packaging plant and a biogas plant to treat its palm oil mill effluent. It is the first of its kind in West Africa.Key Responsibilities Manage staff and coordinate required recruiting in accordance with the company's policies and applicable regulations and assisting senior management to: Outline key performance indicators for employees. Appraise employees' performance and guide professional development. Reward and disciplining employees. Address employee relations issues and other related human resources developments. Oversee and manage all HR contracts. Develop strong and effective team relationships within the company. Develop, manage and oversee the functional HR budget and manage expenses within budget allocations. Lead all activities related to employment, legislation, HR systems, practices, procedures, compliance, day-to-day development and HR initiatives. Ensure the provision of timely employee-related information to senior management as necessary. Provide advice, guidance and direct support on all aspects related to people management to senior management. Responsible for payroll functions as appropriate in the company. Qualifications / Experience B.Sc or HND in any Social Science discipline or any related field in Industrial Relations/Human Resources Management. Must be certified; SHRM-CP/SP or PHR CIPM HRPL Minimum of 5 years' experience as HR Generalist in well structured environment preferably manufacturing environment Knowledge of Nigerian employment legislation Excellent problem solving, judgment and decision-making skills Strong verbal and written communication skills and very good interpersonal skills Very high degree of discretion and confidentiality Good attention to detail Excellent knowledge of Microsoft Office Applications in particular MS Outlook, MS Word and MS Excel at an advanced standard.
          Entomologist at Presco      Cache   Translate Page      
Presco is a fully-integrated agro-industrial establishment with oil palm plantations, palm oil mill, palm kernel crushing plant and vegetable oil refining and fractionation plant. It also has an olein and stearin packaging plant and a biogas plant to treat its palm oil mill effluent. It is the first of its kind in West Africa.Location: Benin City, Edo Duties and Responsibilities Work with Phyto Technical Officer, Phyto team and Research and Development on all insect pest related monitoring research activities Keep records from all observation and trials Report on all insect pest matters to Research and Development and Agric. Department entomological and field experiments Contribute to the continuous improvement of the IPM All other tasks assigned by Management. Qualification Candidate must possess a minimum of B.Sc in Zoology or any related discipline.
          Chief Security Officer at Presco      Cache   Translate Page      
Presco is a fully-integrated agro-industrial establishment with oil palm plantations, palm oil mill, palm kernel crushing plant and vegetable oil refining and fractionation plant. It also has an olein and stearin packaging plant and a biogas plant to treat its palm oil mill effluent. It is the first of its kind in West Africa.Location: Benin City, Edo Qualifications/Experience Candidates applying for this position must be Ex-Service or Military Officers or possess relevant qualification. In addition: Applicant must possess minimum of ten years relevant work experience in similar position Must be computer literate Must be above 50 years Ensure, on a day-to-day basis, proper surveillance over life and property in the company and other designated places Provide professional advice to ensure that strategies adopted meet all internal and external services delivery requirement Be proactive in relation to identifying, managing crisis, resolving complex risks issues and mitigating activities in response to security related emergency situations Applicants must have in the past exhibited considerable degree of competence, responsiveness and demostrable integrity in security matters in a similar organization Must be in good health to be able to endure the rigours of the duties of a Chief Security Officer.
          L861 – HNR-CM13-PP      Cache   Translate Page      

Categories:

20181007 build HNR-CM13-PP
- New Phone Management – Advanced LTE Services support
- Audio Enhancements (Quality / Loudness)

20180805 build HNR-CM13-PP
- Battery & Thermal Management (kernel + device tree fixes)
- Volume Level fixes (higher gain)

20180705 build HNR-CM13-PP
- Camera App/Framework Fixes & Additions:
  + fixed slowmotion Mode (working again in 1280×720)
  + added [...]

(Read more...)


          Human Resources Manager      Cache   Translate Page      

Presco is a fully-integrated agro-industrial establishment with oil palm plantations, palm oil mill, palm kernel crushing plant and vegetable oil refining and fractionation plant. It also has an olein and stearin packaging plant and a biogas plant to treat its palm oil mill effluent. It is the first of its kind in West Africa.


We are recruiting to fill the position below:

Job Position: Human Resources Manager
Job Location:
Nigeria

          Help Wanted - Kernels - Markham, ON      Cache   Translate Page      
Now hiring at 5000 Highway 7,...
From Job Spotter - Sat, 01 Sep 2018 18:21:45 GMT - View all Markham, ON jobs
          BitCracker:BitLocker密码破解工具      Cache   Translate Page      

BitCracker是第一个开源的用于破解使用BitLocker加密存储设备(如硬盘,USB Pendrive,SD卡等)的工具。BitLocker是Windows Vista,7,8.1和10(Ultimate,Pro和Enterprise)上提供的加密功能。BitLocker提供了许多不同的身份验证方法来加密存储设备,如可信赖平台模块(TPM),智能卡,恢复密码,用户提供的密码。通过字典攻击,BitCracker会尝试找出正确的用户密码或恢复密码,来解密加密的存储设备。目前,已在CUDAOpenCL中实现。

注:在COMMID 7b2a6b6(CUDA版本)和 5f09d7f(OpenCL版本)中存在固定的严重错误:bad loop termination(循环终止错误)!可尝试重新运行解决。

运行环境

运行BitCracker-CUDA的最低要求如下:

CC 3.5或更高版本的 NVIDIA GPU

CUDA 7.5或更新版本

运行BitCracker-OpenCL的最低要求是,GPU或CPU支持OpenCL(查看帮助)

BitCracker至少需要260 MB的设备内存。

出于性能原因,我们强烈建议你在GPU上运行(具体请参阅性能部分)。

构建

运行build.sh脚本后,会在build目录中生成4个可执行文件:bitcracker_hash,bitcracker_rpgen,bitcracker_cuda,bitcracker_opencl。

为了构建与你的NVIDIA GPU和CUDA版本一致地bitcracker_cuda文件,你需要修改src_CUDA/Makefile,并选择正确的SM版本。对应可参考下表:

GPU 架构 建议的 CUDA Makefile
Kepler CUDA 7.5 arch=compute_35,code=sm_35
Maxwell CUDA 8.0 arch=compute_52,code=sm_52
Pascal CUDA 9.0 arch=compute_60,code=sm_60
Volta CUDA 9.0 arch=compute_70,code=sm_70

攻击准备

创建一个使用BitLocker加密的存储设备映像,使用dd命令示例:

sudo dd if=/dev/disk2 of=/path/to/imageEncrypted.img conv=noerror,sync
4030464+0 records in
4030464+0 records out
2063597568 bytes transferred in 292.749849 secs (7049013 bytes/sec)

然后,在imageEncrypted.img上运行bitcracker_hash可执行文件,以:

检查映像是否具有有效格式并且可以被BitCracker攻击

检查原始存储设备哈希是否已使用用户密码或恢复密码加密

提取映像的哈希描述

如果一切正常,bitcracker_hash将会生成1到2个输出文件:

hash_user_pass.txt:如果设备使用用户密码加密,则此文件包含启动用户密码攻击模式所需的哈希

hash_recv_pass.txt:启动Recovery Password攻击模式所需的哈希值

注:BDE加密卷可以针对不同的身份验证方法使用不同的格式。如果bitcracker_hash无法在你的加密映像上找到恢复密码,请与我联系。

示例:

/build/bitcracker_hash -o test_hash -i ./Images/imgWin7
---------> BitCracker Hash Extractor <---------
Opening file ./Images/imgWin7
....
Signature found at 0x02208000
Version: 2 (Windows 7 or later)
VMK entry found at 0x022080bc
VMK encrypted with user password found!
VMK encrypted with AES-CCM
VMK entry found at 0x0220819c
VMK encrypted with Recovery key found!
VMK encrypted with AES-CCM
User Password hash:
$bitlocker$0$16$89a5bad722db4a729d3c7b9ee8e76a29$1048576$12$304a4ac192a2cf0103000000$60$24de9a6128e8f8ffb97ac72d21de40f63dbc44acf101e68ac0f7e52ecb1be4a8ee30ca1e69fbe98400707ba3977d5f09b14e388c885f312edc5c85c2
Recovery Key hash:
$bitlocker$2$16$8b7be4f7802275ffbdad3766c7f7fa4a$1048576$12$304a4ac192a2cf0106000000$60$6e72f6ef6ba688e72211b8cf8cc722affd308882965dc195f85614846f5eb7d9037d4d63bcc1d6e904f0030cf2e3a95b3e1067447b089b7467f86688
Output file for user password attack: "hash_user_pass.txt"
Output file for recovery password attack: "hash_recv_pass.txt"

用户密码攻击

如果存储设备已使用用户提供的密码加密,则可以该类型的攻击,如下图所示。

687474703a2f2f6f70656e77616c6c2e696e666f2f77696b692f5f6d656469612f6a6f686e2f626974637261636b65725f696d67312e706e67.png

BitCracker执行字典攻击,需要你提供可能的用户密码列表。

执行攻击需要:

hash_user_pass.txt文件

可能的用户密码列表(需要你自己提供)

命令行示例:

./build/bitcracker_cuda -f hash_user_pass.txt -d wordlist.txt -t 1 -b 1 -g 0 -u

-f:hash_user_pass.txt文件存放路径

-d:爆破字典存放路径

-t:每个CUDA线程处理的密码数

-b:CUDA blocks的数量

-g:NVIDIA GPU设备ID

-u:指定你想要的用户密码攻击

注:查看所有可选项,可以通过./build/bitcracker_cuda -h命令。为了获得最佳性能,请参阅“性能”部分中的表格,并根据你的NVIDIA GPU正确设置t和b选项。

bitcracker_opencl可执行文件同上。

输出示例:

====================================
Selected device: GPU Tesla K80 (ID: 0)
====================================
....
Reading hash file "hash_user_pass.txt"
$bitlocker$0$16$0a8b9d0655d3900e9f67280adc27b5d7$1048576$12$b0599ad6c6a1cf0103000000$60$c16658f54140b3d90be6de9e03b1fe90033a2c7df7127bcd16cb013cf778c12072142c484c9c291a496fc0ebd8c21c33b595a9c1587acfc6d8bb9663
====================================
Attack
====================================
Type of attack: User Password
CUDA Threads: 1024
CUDA Blocks: 1
Psw per thread: 1
Max Psw per kernel: 1024
Dictionary: wordlist.txt
Strict Check (-s): No
MAC Comparison (-m): No
CUDA Kernel execution:
  Stream 0
  Effective number psw: 12
  Passwords Range:
    abcdefshhf
    .....
    blablalbalbalbla12
  Time: 28.651947 sec
  Passwords x second:     0.42 pw/sec
================================================
....
Password found: paperino
================================================

目前BitCracker能够处理长度在8到55个字符之间的输入密码。

恢复密码攻击

在加密存储设备期间,(无论身份验证方法如何)BitLocker会要求用户在某处存储恢复密码,该密码可用于在无法解锁驱动器的情况下恢复对加密存储设备的访问。恢复密码为48位的密钥,如下所示:

236808-089419-192665-495704-618299-073414-538373-542366

有关更多详细信息,请参阅Microsoft文档

至于用户密码,BitCracker能够执行字典攻击找到BitLocker生成的正确恢复密码来加密存储设备。注意,目前只有在存储设备未使用TPM加密时,才能执行恢复密码攻击。

执行攻击需要:

hash_recv_pass.txt文件

可能的恢复密码列表

生成并存储所有可能的密码并不容易。出于这个原因,我们创建了一个名为bitcracker_rpgen的恢复密码生成器。使用它,你可以创建一组可用于执行恢复密码攻击的字典列表。

示例:

./build/bitcracker_rpgen -n 300 -p 10000000 -s 000000-000011-000022-000033-000044-000055-008459-015180

-n:密码字典数量

-p:每个密码字典的恢复密码数量

-s:生成恢复密码的启始范围

你也可以使用不带选项的默认配置:

./build/bitcracker_rpgen
************* BitCracker Recovery Password wordlists generator *************
Running with this configuration:
### Create 100 wordlists
### Recovery Passwords per wordlist=5000000
### Allow duplicates=No
### Generate starting from=000000-000011-000022-000033-000044-000055-000066-000077
Creating wordlist "bitcracker_wlrp_0.txt" with 5000000 passwords
First password=000000-000011-000022-000033-000044-000055-000066-000077
Last password= 000000-000011-000022-000033-000044-000055-000902-217822

注:-s选项可让你接着上次生成的恢复密码继续生成密码,而不用再重复生成。-d选项允许你在同一个恢复密码中包含重复项。例如:000000-000011-000055-000055-000044-000055-000902-217822

命令行示例:

./build/bitcracker_cuda -f hash_recv_pass.txt -d bitcracker_wlrp_0.txt -t 1 -b 1 -g 0 -r

输出示例:

====================================
Selected device: GPU Tesla K80 (ID: 0)
====================================
...
Reading hash file "hash_recv_pass.txt"
$bitlocker$2$16$432dd19f37dd413a88552225628c8ae5$1048576$12$a0da3fc75f6cd30106000000$60$3e57c68216ef3d2b8139fdb0ec74254bdf453e688401e89b41cae7c250739a8b36edd4fe86a597b5823cf3e0f41c98f623b528960a4bee00c42131ef
====================================
Attack
====================================
Type of attack: Recovery Password
CUDA Threads: 1024
CUDA Blocks: 1
Psw per thread: 8
Max Psw per kernel: 8192
Dictionary: wordlist.txt
Strict Check (-s): No
MAC Comparison (-m): No
CUDA Kernel execution:
  Effective passwords: 6014
  Passwords Range:
    390775-218680-136708-700645-433191-416240-153241-612216
    .....
    090134-625383-540826-613283-563497-710369-160182-661364
  Time: 193.358937 sec
  Passwords x second:    31.10 pw/sec
================================================
CUDA attack completed
Passwords evaluated: 6014
Password found: 111683-110022-683298-209352-468105-648483-571252-334455
================================================

误报

默认情况下,BitCracker会对用户和恢复密码模式进行快速攻击,这可能会导致一些误报。想要降低误报率,你可以使用-m选项重新运行攻击。该选项将启用MAC验证(缺点是速度将会变慢许多)。

示例

以下是我们为大家提供的几个加密存储设备的映像:

imgWin7: BitLocker on Windows 7 Enteprise edition OS

imgWin8: BitLocker on Windows 8 Enteprise edition OS

imgWin10Compat.vhd: BitLocker (compatible mode) on Windows 10 Pro edition OS

imgWin10NotCompat.vhd: BitLocker (not compatible mode) on Windows 10 Pro edition OS

imgWin10NotCompatLongPsw.vhd : BitLocker (not compatible mode) on Windows 10 Pro edition OS with a longer user password

你可以使用存储在“Dictionary”文件夹中的密码字典,使用用户和恢复密码模式来攻击这些映像。

性能

以下我们报告了在用户密码(-u选项)快速攻击(默认)的情况下BitCracker的最佳性能。

GPU Acronim GPU Arch CC # SM Clock CUDA
GFT GeForce Titan Kepler 3.5 14 835 7.0
GTK80 Tesla K80 Kepler 3.5 13 875 7.5
GFTX GeForce Titan X Maxwell 5.2 24 1001 7.5
GTP100 Tesla P100 Pascal 6.1 56 1328 8.0
GTV100 Tesla V100 Volta 7.0 80 1290 9.0
AMDM Radeon Malta - - - - -

性能:

Version GPU -t -b Passwords x kernel Passwords/sec Hash/sec
CUDA GFT 8 13 106.496 303 635 MH/s
CUDA GTK80 8 14 114.688 370 775 MH/s
CUDA GFTX 8 24 106.608 933 1.957 MH/s
CUDA GTP100 1 56 57.344 1.418 2.973 MH/s
CUDA GTV100 1 80 81.920 3.252 6.820 MH/s
OpenCL AMDM 32 64 524.288 241 505 MH/s
OpenCL GFTX 8 24 196.608 884 1.853 MH/s

John The Ripper

我们在John The Ripper中发布了BitCracker作为OpenCL-BitLocker格式(–format=bitlocker-opencl)。bitcracker_hash生成的哈希文件与John格式完全兼容。

在GTV100上,密码率约为3150p/s。JtR团队开发了这种攻击的CPU版本(–format=bitlocker); 在CPU Intel(R) Xeon(R) v4 2.20GHz上,密码率约为78p/s。

Hashcat

正在开发中~

计划

实现多GPU

提供Qt界面

最后,感谢John The Ripper团队,Dislocker和LibBDE项目的支持。希望大家能积极的分享并测试我们的项目,并第一时间将问题反馈给我们!

*参考来源:GitHub,FB小编 secist 编译,转载请注明来自FreeBuf.COM


          分割算法总结 - 开发者头条      Cache   Translate Page      

周末应该是一个好好休息的时间,但是一定会有在默默努力科研的你,由于最近是开学季,很多关注的朋友一直会问“计算机视觉战队平台有基础性的内容吗?”,今天我和大家说一次,我们平台之前有推送很多基础的知识,有兴趣的或者是刚刚接触CV&DL的你,可以去历史消息阅读,在这也感谢所有一直关注和支持我们的您!

今天其实是一个重要的日子!不告诉大家什么事情,但是我会把自己喜悦的心情与大家分享,接下来就和大家说说目标分割的事吧~

image

分割其实在很多领域是非常重要的研究对象,现在也有很多研究者在该领域大展身手,比如何大神,一直在该方面的做的最优秀之一,今天就基于他CVPR 2018的一篇优秀Paper说起。

01 概述

大多数目标实例分割的方法都要求所有的训练样本带有segmentation masks。这种要求就使得注释新类别的开销很大,并且将实例分段模型限制为∼100注释良好的类。

本次技术目的是提出一种新的部分监督的训练模式,该模式具有一种新的权重传递函数,结合一种新的权重传递函数,可以在一大组类别上进行训练实例分割模型,所有这些类别都有框注释,但只有一小部分有mask注释。这些设计允许我们训练MASK R-CNN,使用VisualGenome数据集的框注释和COCO数据集中80个类的mask注释来检测和分割3000种视觉概念。

最终,在COCO数据集的对照研究中评估了提出的方法。这项工作是迈向对视觉世界有广泛理解的实例分割模型的第一步。

在正式细说本次分割技术之前,还是简单说下分割的事,有一个简单的引言和大家分享下,没有兴趣的您可以直接跳过,阅读关键技术部分,谢谢!

目标检测器已经变得更加精确,并获得了重要的新功能。最令人兴奋的是能够预测每个检测到的目标前景分割mask,这是一个称为instance segmentation的任务。在实践中,典型的instance segmentation系统仅限于仅包含大约100个目标类别的广阔视觉世界的一小部分。

语义分割其实就是对图片的每个像素都做分类。其中,较为重要的语义分割数据集有:VOC2012 以及 MSCOCO

传统机器学习方法:如像素级的决策树分类,参考TextonForest以及Random Forest based classifiers。再有就是深度学习方法。

深度学习最初流行的分割方法是,打补丁式的分类方法 (patch classification) 。逐像素地抽取周围像素对中心像素进行分类。由于当时的卷积网络末端都使用全连接层 (full connected layers) ,所以只能使用这种逐像素的分割方法。

但是到了2014年,来自伯克利的Fully Convolutional Networks(FCN)【点击蓝色,有链接直接可以阅读全卷积网络相关资料】卷积网络,去掉了末端的全连接层。随后的语义分割模型基本上都采用了这种结构。除了全连接层,语义分割另一个重要的问题是池化层。池化层能进一步提取抽象特征增加感受域,但是丢弃了像素的位置信息。但是语义分割需要类别标签和原图像对齐,因此需要从新引入像素的位置信息。有两种不同的架构可以解决此像素定位问题。

  • 第一种是编码-译码架构。编码过程通过池化层逐渐减少位置信息、抽取抽象特征;译码过程逐渐恢复位置信息。一般译码与编码间有直接的连接。该类架构中U-net 是最流行的。

  • 第二种是膨胀卷积 (dilated convolutions) 【这个核心技术值得去阅读学习】,抛弃了池化层。使用的卷积核如下图所示:

image

居然都说到这里,那我继续来简单说一些相关的文献吧。

按时间顺序总结,大概我能总结9篇paper,看语义分割的结构是如何演变的。分别有FCNSegNet U-NetDilated ConvolutionsDeepLab (v1 & v2)RefineNet PSPNetLarge Kernel MattersDeepLab v3

参考文章:(“计算机视觉战队”微信公众平台推送)

1)FCN 2014年

image

主要的贡献:

  • 为语义分割引入了 端到端 的卷积网络,并流行开来

  • 重新利用 ImageNet 的预训练网络用于语义分割

  • 使用 反卷积层 进行上采样

  • 引入跳跃连接来改善上采样粗糙的像素定位

比较重要的发现是,分类网络中的全连接层可以看作对输入的全域卷积操作,这种转换能使计算更为高效,并且能重新利用ImageNet的预训练网络。经过多层卷积及池化操作后,需要进行上采样,FCN使用反卷积(可学习)取代简单的线性插值算法进行上采样。

2)SegNet 2015年

image

编码-译码架构

主要贡献:将池化层结果应用到译码过程。引入了更多的编码信息。使用的是pooling indices而不是直接复制特征,只是将编码过程中 pool 的位置记下来,在 uppooling 是使用该信息进行 pooling 。

3)U-Net 2015

U-Net有更规整的网络结构,通过将编码器的每层结果拼接到译码器中得到更好的结果。

image

4)Dilated Convolutions 2015年

通过膨胀卷积操作聚合多尺度的信息

image

主要贡献:

池化在分类网络中能够扩大感知域,同样降低了分辨率,所以提出了膨胀卷积层。

image

5)DeepLab (v1 & v2) 2014 & 2016

“计算机视觉战队”微信公众平台推送过,可以查阅。

6)RefineNet 2016年

image

image

主要贡献:

膨胀卷积有几个缺点,如计算量大、需要大量内存。这篇文章采用编码-译码架构。编码部分是ResNet-101模块。译码采用RefineNet模块,该模块融合了编码模块的高分辨率特征和前一个RefineNet模块的抽象特征。每个RefineNet模块接收多个不同分辨率特征,并融合。

7)PSPNet 2016年

Pyramid Scene Parsing Network 金字塔场景解析网络

image

主要贡献:

金字塔池化模块通过应用大核心池化层来提高感知域。使用膨胀卷积来修改ResNet网,并增加了金字塔池化模块。金字塔池化模块对ResNet输出的特征进行不同规模的池化操作,并作上采样后,拼接起来,最后得到结果。

本文提出的网络结构简单来说就是将DeepLab(不完全一样)aspp之前的feature map pooling了四种尺度之后将5种feature map concat到一起经过卷积最后进行prediction的过程。

8)Large Kernel Matters 2017

image

主要贡献:

理论上更深的ResNet能有很大的感知域,但研究表明实际上提取的信息来自很小的范围,因此使用大核来扩大感知域。但是核越大,计算量越大,因此将k x k的卷积近似转换为1 x k + k x 1和k x 1 + 1 x k卷积的和,称为GCN

本文的架构是:使用ResNet作为编译器,而GCN和反卷积作为译码器。还使用了名为Boundary Refinement的残余模块。

9)DeepLab v3 2017

image

主要贡献:

和DeepLab v2一样,将膨胀卷积应用于ResNet中。改进的ASPP指的是将不同膨胀率的膨胀卷积结果拼接起来,并使用了BN 。与Dilated convolutions (2015) 不一样的是,v3直接对中间的特征图进行膨胀卷积,而不是在最后做。

小总结:

image

现在把之前较为典型的简单介绍了一遍,现在接下来我们继续说今天这个分割技术。

02 学习分割Every Thing

—————————

C是一组目标类别,希望为其训练一个instance segmentation模型。大多数现有方法假设C中的所有训练样本都带有instance mask

于是,本次放宽了这一要求,而是假设C=A∪B,其中来自A中类别的样本有mask,而B中的只有边界框。由于B类的样本是弱标记的w.r.t.目标任务(instance segmentation),将强标签和弱标签组合的训练作为一个部分监督的学习问题。注意到可以很容易地将instance mask转换为边界框,假设边界框注释也适用于A中的类。

给出了一个包含边界框检测组件和mask预测组件的MASK R-CNN instance segmentation模型,提出了MaskX R-CNN方法,该方法将特定类别的信息从模型的边界框检测器转移到其instance mask预测器。

本方法是建立在Mask R-CNN,因为它是一个简单的instance segmentation模型,也取得了最先进的结果。简单地说,MASK R-CNN可以被看作是一个更快的R-CNN边界框检测模型,它有一个附加的mask分支,即一个小的全卷积网络(FCN)。

在推理时,将mask分支应用于每个检测到的对象,以预测instance-level的前景分割mask。在训练过程中,mask分支与Faster R-CNN中的标准边界框head并行训练。在Mask R-CNN中,边界框分支中的最后一层和mask分支中的最后一层都包含特定类别的参数,这些参数分别用于对每个类别执行边界框分类和instance mask预测。与独立学习类别特定的包围框参数和mask参数不同,我们建议使用一个通用的、与类别无关的权重传递函数来预测一个类别的mask参数,该函数可以作为整个模型的一部分进行联合训练。

具体如下如所示:

image

在训练期间,假设对于A和B两组类,instance mask注释仅适用于A中的类,而不适用于B中的类,而A和B中的所有类都有可用的边界框注释。如上图所示,我们使用A∪B中所有类的标准框检测损失来训练边界框head,但只训练mask head和权重传递函数T(·),在A类中使用mask loss,考虑到这些损失,我们探索了两种不同的训练过程:分阶段训练端到端训练

分阶段训练

由于Mask R-CNN可以被看作是用mask head增强Faster R-CNN,一种可能的训练策略是将训练过程分为检测训练(第一阶段)和分割训练(第二阶段)。

在第一阶段,只使用A∪B中类的边界框注释来训练一个Faster R-cnn,然后在第二阶段训练附加的mask head,同时保持卷积特征和边界框head的固定。这样,每个c类的类特定检测权重wc可以被看作是在训练第二阶段时不需要更新的固定类emdet层叠向量。

该方法具有很好的实用价值,使我们可以对边界框检测模型进行一次训练,然后对权重传递函数的设计方案进行快速评估。它也有缺点,这是我们接下来要讨论的。

端到端联合训练

结果表明,对于MASK R-CNN来说,多任务训练比单独训练更能提高训练效果。上述分阶段训练机制将检测训练和分割训练分开,可能导致性能低下。

因此,我们也希望以一种端到端的方式,联合训练边界框head和mask head。原则上,可以直接使用A∪B中类的box损失和A中类的mask loss来进行反向传播训练,但是,这可能导致A组和B组之间的类特定检测权重Wc的差异,因为只有c∈A的Wc会通过权重传递函数T(·)从mask loss得到梯度。

我们希望Wc在A和B之间是均匀的,这样在A上训练的预测Wc=T(Wc;θ)可以更好地推广到B。

03 实验

——

表1 Ablation on input to T

image

表2 Ablation on the structure of T

image

表3 Impact of the MLP mask branch

image

表4 Ablation on the training strategy

image

image

Each point corresponds to our method on a random A/Bsplit of COCO classes.

效果图

image

Mask predictions from the class-agnostic baseline (top row) vs. our MaskX R-CNN approach (bottom row). Green boxes are classes in set A while the red boxes are classes in set B. The left 2 columns are A = {voc} and the right 2 columns are A = {non-voc}.

image

image

Example mask predictions from our MaskX R-CNN on 3000 classes in Visual Genome. The green boxes are the 80 classes that overlap with COCO (set A with mask training data) while the red boxes are the remaining 2920 classes not in COCO (set B without mask training data). It can be seen that our model generates reasonable mask predictions on many classes in set B. See §5 for details.

如果想加入我们“计算机视觉战队”,请扫二维码加入学习群,我们一起学习进步,探索领域中更深奥更有趣的知识!

image


          Unlikely Outcomes      Cache   Translate Page      
In a piece at the New York Times Nate Cohn reviews some of the possible outcomes in the midterm elections. I recommend you read the whole thing but here is the kernel of his piece: The possibility that the Kavanaugh nomination is helping Republicans in Republican-leaning areas is important because the fight for control of […]
          Hasta Windows 10 Mobile ve llegar una nueva Build en la oleada de actualizaciones que ha lanzado Microsoft      Cache   Translate Page      

Hasta Windows 10 Mobile ve llegar una nueva Build en la oleada de actualizaciones que ha lanzado Microsoft

While we wait for a fix that puts an end to the problems caused by Windows 10 October 2018 Update, the ones that deleted files from the "My Documents" folder, activity at Microsoft continues. Yesterday we saw them warn that they are working on correcting the problems and advise affected users that the best thing to do was not to touch their machines.

But if in your case everything went smoothly and you are already running Windows 10 version 1809, you may be interested to know that Redmond now has a new cumulative update on the market. It is Build 17763.55, which was released a few hours ago.

This cumulative update is aimed at those who already have Windows 10 October 2018 Update installed, that is, version 1809. It is a Build that arrives with patch code KB4464330 and does not include any notable new features:

  • It fixes a group policy expiration issue in which an incorrect timing calculation could prematurely remove profiles on devices subject to the "Delete user profiles older than a specified number of days" condition.
  • The second improvement is related to security, with updates for Windows Kernel, Microsoft Graphics Component, Microsoft Scripting Engine, Internet Explorer, Windows Storage and Filesystems, Windows Linux, Windows Wireless Networking, Windows MSXML, the Microsoft JET Database Engine, Windows Peripherals, Microsoft Edge, Windows Media Player and Internet Explorer.

Only two points worth highlighting, so we will have to wait for yet another update to see whether they fix the real problems with Windows 10 October 2018 Update.

If you have Windows 10 1809 you can download it by going to the Settings menu, looking for "Update & Security" and then clicking "Check for updates".

Alongside this Build, two more cumulative updates have been released for desktops and tablets. On the one hand, a Build for users whose machines are still on Windows 10 April 2018 Update, corresponding to version number 17134.345. It is meant to add security updates for the Kernel, graphics components, Edge,

On the other hand, for those who still have Windows 10 Fall Creators Update installed, in other words Windows 10 version 1709, they are releasing Build 16299.726 (patch KB4462918) to add security improvements, the same ones that reach those still on Windows 10 Creators Update, version 1703, users who receive Build 15063.1387 to add security updates.

Also for Windows 10 Mobile

But the news does not end there, because believe it or not Windows 10 Mobile has received a new update. Don't expect new features though, since the platform is already in its terminal phase. It is Build 15254.53, which corresponds to security patch KB4464853. This cumulative update aims to improve system security in some of its components.

  • Security updates for Internet Explorer, Windows Media Player, Microsoft Graphics Component, Microsoft Edge, Windows Kernel, Windows Storage and Filesystems, and Microsoft Scripting Engine

In this case, to download the Build you must go to "Settings", "Update & Security" and enter "Windows Update" to check whether the update is available.

More information | Microsoft Support


-
The article Even Windows 10 Mobile gets a new Build in the wave of updates Microsoft has released was originally published on Xataka Windows by Jose Antonio.


          Can Linux developers revoke the GPLv2 if they disagree with the Code of Conduct?      Cache   Translate Page      

By ICT lawyer Arnoud Engelfriet.

Uproar in the Linux camp: a new Code of Conduct in the project has caused a lot of commotion, including threats that contributions to the kernel will be withdrawn. These were contributed by volunteers under the GPLv2 open source license, but according to parties who disagree with the Code it is now possible to revoke that license again... Read more

The post Can Linux developers revoke the GPLv2 if they disagree with the Code of Conduct? appeared first on Ius Mentis.


          Comment on Sony PS4 Orbis Emulators by HYPERTiZ      Cache   Translate Page      
This but not built - still WIP - https://github.com/AlexAltea/orbital Kernel stuff etc apparently.
          Website Development In UK at Affordable Prices      Cache   Translate Page      
All commands are executed with root privileges. You see this message when the /system partition on your device is a read-only filesystem (e.g. SquashFS). To handle this case, Framaroot tries to use a trick by adding "ro.kernel.qemu=1" to the file /data/local.prop. Download from here
          Software Developer - Wafer Space Semiconductor Technologies Private Limited - Bengaluru, Karnataka      Cache   Translate Page      
Must have experience with Architecture level with multiple SW technology development and e2e product integration and product scope(kernel, MW, android framework...
From Monster IN - Tue, 09 Oct 2018 14:34:36 GMT - View all Bengaluru, Karnataka jobs
          A Fourier extension based numerical integration scheme for fast and high-order approximation of convolutions with weakly singular kernels. (arXiv:1810.03835v1 [math.NA])      Cache   Translate Page      

Authors: Akash Anand, Awanish Kumar Tiwari

Computationally efficient numerical methods for high-order approximations of convolution integrals involving weakly singular kernels find many practical applications including those in the development of fast quadrature methods for numerical solution of integral equations. Most fast techniques in this direction utilize uniform grid discretizations of the integral that facilitate the use of FFT for $O(n\log n)$ computations on a grid of size $n$. In general, however, the resulting error converges slowly with increasing $n$ when the integrand does not have a smooth periodic extension. Such extensions, in fact, are often discontinuous and, therefore, their approximations by truncated Fourier series suffer from Gibb's oscillations. In this paper, we present and analyze an $O(n\log n)$ scheme, based on a Fourier extension approach for removing such unwanted oscillations, that not only converges with high-order but is also relatively simple to implement. We include a theoretical error analysis as well as a wide variety of numerical experiments to demonstrate its efficacy.


          Fractional Diffusion Maps. (arXiv:1810.03952v1 [math.CA])      Cache   Translate Page      

Authors: Harbir Antil, Tyrus Berry, John Harlim

In locally compact, separable metric measure spaces, heat kernels can be classified as either local (having exponential decay) or nonlocal (having polynomial decay). This dichotomy of heat kernels gives rise to operators that include (but are not restricted to) the generators of the classical Laplacian associated to Brownian processes as well as the fractional Laplacian associated with $\beta$-stable L\'evy processes. Given embedded data that lie on or close to a compact Riemannian manifold, there is a practical difficulty in realizing this theory directly since these kernels are defined as functions of geodesic distance which is not directly accessible unless if the manifold (i.e., the embedding function or the Riemannian metric) is completely specified. This paper develops numerical methods to estimate the semigroups and generators corresponding to these heat kernels using embedded data that lie on or close to a compact Riemannian manifold (the estimators of the local kernels are restricted to Neumann functions for manifold with boundary). For local kernels, the method is basically a version of the diffusion maps algorithm which estimates the Laplace-Beltrami operator on compact Riemannian manifolds. For non-local heat kernels, the diffusion maps algorithm must be modified in order to estimate fractional Laplacian operators using polynomial decaying kernels. In this case, the graph distance is used to approximate the geodesic distance with appropriate error bounds. Numerical experiments supporting these theoretical results are presented. For manifolds with boundary, numerical results suggest that the proposed fractional diffusion maps framework implemented with non-local kernels approximates the regional fractional Laplacian.


          Remove all unused kernels with apt-get      Cache   Translate Page      
$ aptitude remove $(dpkg -l|egrep '^ii linux-(im|he)'|awk '{print $2}'|grep -v `uname -r`)
This should do the same thing and is about 70 chars shorter.

commandlinefu.com
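For anyone wary of pasting a destructive one-liner, here is a commented, step-by-step variant of the same idea. It is only a sketch: it assumes a Debian/Ubuntu system where kernel packages are named linux-image-* / linux-headers-*, and it only prints the candidate list until you uncomment the final line yourself.

# List installed kernel image/header packages, excluding the running kernel.
current="$(uname -r)"
candidates="$(dpkg -l 'linux-image-*' 'linux-headers-*' 2>/dev/null | awk '/^ii/ {print $2}' | grep -v "$current")"

# Review what would be removed before doing anything destructive.
echo "Running kernel: $current"
echo "Would remove:"
echo "$candidates"

# Uncomment only after checking the list above (meta-packages may appear in it).
# sudo apt-get purge $candidates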



          Microsoft releases Windows 10 builds 17763.55, 17134.345 - here's what's new      Cache   Translate Page      


Today is the second Tuesday of the month, making it Patch Tuesday, the day that Microsoft releases updates for all supported versions of Windows 10. That actually includes the October 2018 Update, even though it was pulled from Windows Update just a few days after it was released.

But if you did update before Windows 10 version 1809 was pulled, you'll see KB4464330 , and that brings the build number to 17763.55. You can manually download it here , and it contains the following fixes:

Addresses an issue affecting group policy expiration where an incorrect timing calculation may prematurely remove profiles on devices subject to the "Delete user profiles older than a specified number of day.”

Security updates to Windows Kernel, Microsoft Graphics Component, Microsoft Scripting Engine, Internet Explorer, Windows Storage and Filesystems, Windows Linux, Windows Wireless Networking, Windows MSXML, the Microsoft JET Database Engine, Windows Peripherals, Microsoft Edge, Windows Media Player, and Internet Explorer.

It doesn't address the issue that caused files to be deleted upon upgrading.

Those on Windows 10 version 1803, or the April 2018 Update, will get KB4462919 , bringing the build number to 17134.345. It can be manually downloaded here , and it contains the following fixes:

Security updates to Internet Explorer, Windows Media Player, Microsoft Graphics Component, Windows Peripherals, Windows Shell, Windows Kernel, Windows Datacenter Networking, Windows Storage and Filesystems, Microsoft Edge, Microsoft Scripting Engine, Windows Linux, and the Microsoft JET Database Engine.

Those on the Windows 10 Fall Creators Update, or version 1709, will get KB4462918 , and that brings the build number to 16299.726. You can manually download it here , and it contains the following security fixes:

Security updates to Internet Explorer, Windows Media Player, Microsoft Graphics Component, Windows Shell, Windows Kernel, Windows Datacenter Networking, Windows Storage and Filesystems, Microsoft Scripting Engine, and the Microsoft JET Database Engine .

There's also a known issue to be aware of:

Symptom: After installing this update, some users may see a dialog box with a non-applicable message beginning with the words “Hosted by...” when first starting Microsoft Edge. This dialog will only appear once if they have turned on “Block only third-party cookies” in Microsoft Edge and applied certain language packs before installing this update.

Workaround: Dismiss the dialog box. Microsoft is working on a resolution and will provide an update in an upcoming release.

Those on Windows 10 version 1703, or the Creators Update, will get their final cumulative update today, as the version will no longer be supported. Those users will get KB4462937 , and it brings the build number to 15063.1387. It can be manually downloaded here , and it also contains only security improvements:

Security updates to Internet Explorer, Windows Media Player, Microsoft Graphics Component, Microsoft Edge, Windows Kernel, Windows Storage and Filesystems, and Microsoft Scripting Engine.

The same known issue applies.

Those on the Windows 10 version 1607 LTSC release or Windows Server 2016 will get KB4462917 , and that brings the build number to 14393.2551. You can manually download it here , and it contains the following security fixes:

Security updates to Internet Explorer, Windows Media Player, Microsoft Graphics Component, Microsoft Edge, Windows Kernel, Windows Datacenter Networking, Microsoft Scripting Engine, Microsoft JET Database Engine, and Windows Storage and Filesystems.

There is also a known issue to be aware of:

Symptom: After installing this update, installing Windows Server 2019 Key Management Service (KMS) host keys (CSVLK) on Windows Server 2016 KMS hosts does not work as expected.

Workaround: Microsoft is working on a resolution and will provide an update in an upcoming release.

If you're still on the original version of Windows 10 as a Long-Term Servicing Channel customer, you'll get KB4462922 , bringing the build number to 10240.18005. You can manually download it here , and it contains the following fixes:

Addresses an issue that returns Windows Explorer to the parent folder when new files or folders are created within a child folder of a hidden parent folder.

Addresses an issue that makes it impossible to disable TLS 1.0 and TLS 1.1 when the Federal Information Processing Standard (FIPS) mode is enabled.

Addresses an issue in which all Guest Virtual Machines running Unicast NLB fail to respond to NLB requests after the Virtual Machines restart.

Security updates to Internet Explorer, Windows Media Player, Microsoft Graphics Component, Windows Datacenter Networking, Windows Storage and Filesystems, Microsoft Scripting Engine, and Windows Kernel.

As always, you don't have to manually install these updates if you don't want to. Windows Update will install them automatically.


          What’s New in Windows 10 Cumulative Update KB4462919      Cache   Translate Page      

The October 2018 Patch Tuesday rollout has started, and obviously, Windows 10 is getting new cumulative updates for all versions on the market.

Keep in mind that although Windows 10 Home and Pro no longer receive updates unless you are running Creators Update or newer (this is actually the last month Creators Update gets patches on these SKUs), both Enterprise and Education are supported as part of LTSC.

Windows 10 version 1803 (April 2018 Update) is getting cumulative update KB4462919, and the first thing you’ll observe is that it bumps the OS build number to 17134.345.

What’s new in this update?

A cumulative update published on Patch Tuesday is first and foremost about fixing security vulnerabilities in Windows 10 and several OS components, and the changelog does not include any other fixes and refinements this time, though there’s a chance they’re there.

While you can read the full release notes in the box after the jump, it’s worth highlighting that security improvements include patches for the Microsoft Edge and Internet Explorer browsers, the Windows Scripting engine, the kernel, and the Graphics engine. Microsoft says the following:

“Security updates to Internet Explorer, Windows Media Player, Microsoft Graphics Component, Windows Peripherals, Windows Shell, Windows Kernel, Windows Datacenter Networking, Windows Storage and Filesystems, Microsoft Edge, Microsoft Scripting Engine, Windows Linux, and the Microsoft JET Database Engine.”

You can download cumulative update KB4462919 from Windows Update right now if you’re running the April 2018 Update. We haven’t seen any indication that it may cause issues or fail to install, and the update deployed correctly on the first systems at Softpedia. Delta updates are also available if you just want to deploy the latest changes on a fully up-to-date system.

We’ll continue to monitor reports for possible installation struggles, so check back soon to see if any have been encountered.


          Microsoft Launches KB4464330, First Windows 10 Version 1809 Cumulative Update      Cache   Translate Page      

Microsoft has just released Windows 10 cumulative update KB4464330, the first-ever such update for the newly-released October 2018 Update.

Also known as version 1809, this new OS feature update is no longer available for download with a manual request via Windows Update due to bugs leading to data loss.

However, this cumulative update is specifically aimed at users who managed to install it during the days when it was available, but also for the first systems getting it from Windows Update as an automatic rollout.

Beginning today, Microsoft offers Windows 10 October 2018 Update to computers in waves, and more devices will get it as the company ensures smooth upgrade experience.

Windows 10 cumulative update comes with just two changes, one of which concerns the security improvements that Microsoft has included in the Patch Tuesday rollout. The Windows kernel, Windows Peripherals component, and the Microsoft Graphics Component are all getting improvements.

No known issue

There’s also a second change: Microsoft says it has fixed a bug in the October 2018 Update affecting group policies.

“Addresses an issue affecting group policy expiration where an incorrect timing calculation may prematurely remove profiles on devices subject to the "Delete user profiles older than a specified number of day,’” Microsoft explains.

Cumulative update KB4464330 increases the OS build number to 17763.55. There are no known issues, which means that all devices should be able to install it correctly without any problem.

The update has already installed on our testing systems, and by the looks of it, Microsoft has made substantial improvements to the updating experience. Everything completed in just a couple of minutes with no issue whatsoever.

We’ll continue to keep an eye on reports to see how the experience with this first version 1809 cumulative update turns out to be, and in the meantime you’re recommended to begin patching too.


          Debian GNU/Linux 9 "Stretch" Gets New Kernel Patch to Fix Two Security Flaws      Cache   Translate Page      
The Debian Project published a new Linux security advisory to inform users of the Debian GNU/Linux 9 "Stretch" operating system series about a new kernel security patch that fixes two vulnerabilities.
          DeLock USB 3.0 card reader CFast 2.0 - Current price: 18,037 Ft      Cache   Translate Page      
4043619916863
see details
Description
This Delock card reader can be connected to a free USB port on a notebook or PC. It allows a CFast memory card to be read and written.
Technical data
• Connectors:
  1 x Micro USB 3.0 Type-B female
  1 x CFast 2.0 slot
• Aluminium housing
• Supports CFast Type I / II memory cards
• Backwards compatible with CFast 1.0
• Data transfer rates of up to 5 Gb/s
• Dimensions (L x W x H): approx. 82 x 60 x 12 mm
• Hot Swap, Plug & Play
System requirements
• Windows XP/Vista/7/7-64/8/8-64/8.1/8.1-64, Mac 10.7, 10.8, Linux Kernel 3.13
• PC or notebook computer with a free USB Type-A port

DeLock USB 3.0 card reader CFast 2.0
Current price: 18,037 Ft
Auction ends: 2099-01-01 00:00
          Help Wanted - Kernels - Markham, ON      Cache   Translate Page      
Now hiring at 5000 Highway 7,...
From Job Spotter - Sat, 01 Sep 2018 18:21:45 GMT - View all Markham, ON jobs
          LXer: Debian GNU/Linux 9 "Stretch" Gets New Kernel Patch to Fix Two Security Flaws      Cache   Translate Page      
Published at LXer: The Debian Project published a new Linux security advisory to inform users of the Debian GNU/Linux 9 "Stretch" operating system series about a new kernel security patch that...
          Speeding up a function inside RIFFA PCIe linux driver      Cache   Translate Page      
Any ways to make this function runs faster in kernel ? https://github.com/promach/riffa/blob/full_duplex/driver/linux/riffa_driver.c#L330 this push_circ_queue() function...
          Comment on San Antonio Wants to Limit Gun Stores to ‘Fight Gun Violence’ by Sprocket      Cache   Translate Page      
Unfortunately, there is a kernel of truth here. Once you've accepted the premise that it's acceptable to restrict citizen's rights to solve a problem, why shouldn't the burden fall on the the part of the population that the largest part of the problem? Of course part of the answer is that big city Democrats love of gun control is simply a mechanism to virtue signal on crime while avoiding conflict with their beloved ghetto pets. The other part is that they don't consider the second amendment a real right and they hate our guts.
          Reddit: systemtap linetimes.stp script usage      Cache   Translate Page      

I am using linetimes.stp to debug https://github.com/promach/riffa/blob/full_duplex/driver/linux/circ_queue.c#L88-L108

Why am I facing this error during its usage ?

phung@UbuntuHW15:~/Documents/riffa/driver/linux$ sudo stap linetimes.stp kernel push_circ_queue

semantic error: while resolving probe point: identifier 'kernel' at linetimes.stp:18:7

submitted by /u/promach
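One hedged guess, assuming the stock linetimes.stp example script (which builds its probe points from its first two arguments): push_circ_queue lives in the out-of-tree riffa module rather than in the kernel image, so a kernel.function("push_circ_queue") probe cannot be resolved. Probing the module instead, with its debug info available, may work; the exact module name here is an assumption:

# Probe the function inside the riffa module instead of the kernel image.
sudo stap linetimes.stp 'module("riffa")' push_circ_queue

# Sanity checks: list what SystemTap can actually resolve.
sudo stap -l 'module("riffa").function("*")' | grep push_circ_queue
sudo stap -l 'kernel.function("push_circ_queue")'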
          Phoronix: The Linux Kernel In 2018 Finally Deems USB 3.0 Ubiquitous Rather Than An Oddity      Cache   Translate Page      
The latest news in the "it's about darn time" section is that the Linux kernel's default i386/x86_64 kernel configurations will finally ship with USB 3.0 support enabled, a.k.a. CONFIG_USB_XHCI_HCD...
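As a generic illustration (not from the article), you can check whether a given kernel already enables the XHCI host controller driver, and what the refreshed default configuration now selects, roughly like this:

# How the currently running kernel was configured (path varies by distro).
grep XHCI_HCD /boot/config-"$(uname -r)"

# Inside a kernel source tree, the updated default config should now include it.
make defconfig >/dev/null
grep CONFIG_USB_XHCI_HCD .config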
          Intel Edison Realtime kernel upgrade      Cache   Translate Page      

Hello all,

 

I am looking to provide my Intel Edison with a real-time kernel.

I have found the great work from FerryT about it here Edison images that actually build (including real time kernel) - solved

 

My project uses a GPS and two IMUs connected to the I2C bus. It uses Java and the UPM and MRAA libraries. Nothing more.

 

Currently the performance is not great, as the IMU readings at 500Hz lose some samples from time to time. This is the reason why I would like to test a real-time kernel.

My intention is to just replace the kernel and modules of my current image. I do not need anything else but to upgrade my kernel/modules to an RT one. I would do it on top of a real Edison if possible.

 

My previous experiences with bitbake and the Yocto build system have been very annoying, and I would love to stay away from them if possible. So if the kernel and the things around it can be compiled/installed the old make way, it would be great.

 

I have tried to build the dizzy-rt branch (meta-intel-edison) from FerryT but it fails in some recipes as the sites are no longer available.

 

What kernel would you recommend? Dizzy-rt or is there any other newer RT kernel?

What would be the minimum steps to build just the minimum needed for kernel/modules?

 

Many thanks
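Not an official Edison procedure, but the classic "old make way" for building only a kernel plus its modules looks roughly like the sketch below. It assumes you already have a kernel tree with the Edison and PREEMPT_RT patches applied, a cross toolchain on your PATH, and the .config taken from the running image; the source path and the CROSS_COMPILE prefix are placeholders:

# Hypothetical build flow - adjust ARCH, CROSS_COMPILE and paths to your setup.
cd linux-edison-rt                                   # patched kernel source tree
cp /path/to/config-from-running-image .config
make ARCH=i386 CROSS_COMPILE=i586-poky-linux- olddefconfig
make ARCH=i386 CROSS_COMPILE=i586-poky-linux- -j4 bzImage modules
make ARCH=i386 CROSS_COMPILE=i586-poky-linux- INSTALL_MOD_PATH=./staging modules_install
# Then copy bzImage and the staged /lib/modules tree to the device and update
# the boot partition the same way the stock image does.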


          October 2018 Patch Tuesday: Microsoft fixes 49 flaws, one APT-wielded zero-day      Cache   Translate Page      

With the October 2018 Patch Tuesday release Microsoft has fixed 49 vulnerabilities, 12 of which are rated “critical.”

Previously known flaws and an actively exploited zero-day

The only zero-day in this batch is CVE-2018-8453, an elevation of privilege vulnerability affecting Windows. Attackers must first gain access to the system, but then this vulnerability allows them to run arbitrary code in kernel mode and, ultimately, to install programs; view, change, or delete data; or create new … More

The post October 2018 Patch Tuesday: Microsoft fixes 49 flaws, one APT-wielded zero-day appeared first on Help Net Security.


          Microsoft Windows Kernel 'Win32k.sys' CVE-2018-8453 Local Privilege Escalation Vulnerability      Cache   Translate Page      
Type: Vulnerability. Microsoft Windows is prone to a local privilege-escalation vulnerability; fixes are available.
          Microsoft Windows DirectX Graphics Kernel CVE-2018-8484 Local Privilege Escalation Vulnerability      Cache   Translate Page      
Type: Vulnerability. Microsoft Windows is prone to a local privilege-escalation vulnerability; fixes are available.
          Microsoft Windows Kernel CVE-2018-8497 Local Privilege Escalation Vulnerability      Cache   Translate Page      
Type: Vulnerability. Microsoft Windows is prone to a local privilege-escalation vulnerability; fixes are available.
          Microsoft Windows Kernel CVE-2018-8330 Local Information Disclosure Vulnerability      Cache   Translate Page      
Type: Vulnerability. Microsoft Windows is prone to a local information-disclosure vulnerability; fixes are available.
          PatchGuard, the eWEEK opinion      Cache   Translate Page      

Originally posted on: http://maxblogson.net/archive/2006/11/02/95937.aspx

eWEEK Security Center Editor Larry Seltzer just published an article on eWEEK.com providing his opinion on the benefits, and limitations, of PatchGuard.

In the article, Larry reiterates some of the points I made in my post on PatchGuard a few days ago, namely:

  • Only 64-bit Windows versions are affected by PatchGuard.
  • 64-bit Windows versions, especially desktop versions, have puny market share.
  • The problems are limited to what can generally be called HIPS (Host Intrusion Prevention Systems).
  • Conventional security protection is unaffected by PatchGuard.
  • There is no documented, supported way for vendors to implement key HIPS functions in the face of PatchGuard.

As Larry mentions, HIPS primarily focuses on behavior blocking. In order to do that, it needs the ability to monitor certain kernel information such as the creation and manipulation of processes, image loading, and the creation or movement of memory.

The important thing to realize here is that most of the current security products (with the exception of those that are only HIPS products or that include HIPS features, which won't work on 64-bit Windows) will work just fine as they shouldn't be using anything that triggers interference from PatchGuard.

Yes, the security vendors that rely on HIPS do run in to the proverbial "brick wall" with PatchGuard. However, by removing the restrictions put in place by PatchGuard we are creating an inherently less secure environment. As I mentioned in my earlier post, one of the primary goals of PatchGuard is to ensure the integrity and security of the kernel.

The reality of the story is that as Microsoft is working to make Windows more secure by restricting the amount of access to kernel, the security industry publicly says "great" but internally cringes as it directly impacts their business. The unfortunate truth of this is that while some vendors are working to create products that circumvent PatchGuard (essentially hacking their way in to the kernel) they are giving credibility to the hacker community and proving in no uncertain terms that PatchGuard is vulnerable.

The fact that PatchGuard is vulnerable should not come as a surprise. It is virtually impossible to write an operating system that is actually usable and not have some level of vulnerabilities. According to CERT, for this year alone (Q1-Q3) there have been 5,340 vulnerabilities reported. Compare this to 345 reported 10 years ago.

All this is telling us is that as the complexity in operating systems and applications increases, so does the number of vulnerabilities. As the malware vendors have almost limitless amounts of time and resources to create malware, this trend will only increase (at least for the foreseeable future).

The longer we draw out debates over issues like PatchGuard, the longer it will take to create a more secure operating system. As a whole, the security industry has played catch-up to the malware industry. Rather than complaining about the fact that legitimate security vendors are being "locked out" of hacking the kernel, we need to realize that while the legitimate vendors are being locked out, so also are the malware vendors. Rather than finding ways to circumvent PatchGuard, the industry needs to be finding ways to strengthen it.


          Kernel Patch Protection aka "PatchGuard"      Cache   Translate Page      

Originally posted on: http://maxblogson.net/archive/2006/10/30/95540.aspx

If anyone has been following this technology closely, there have been a lot of complaints by some of the security vendors regarding PatchGuard. I first heard about this technology at TechEd 2006 in a lot of the Vista sessions.

The recent controversy caused me to do a little more research in to this technology and the issues surrounding it.

The official name for this technology is Kernel Patch Protection (KPP) and its purpose is to increase the security and stability of the Windows kernel. KPP was first supported in Windows Server 2003 SP1 and Windows XP Professional x64 Edition. The important thing to understand about this support is that it is for x64 architectures only.

KPP is a direct outgrowth of both customer complaints regarding the security and stability of the Windows kernel and Microsoft's Trustworthy Computing initiative, announced in early 2002.

In order to understand the controversy surrounding KPP, it is important to understand what KPP actually is and what aspects of the Windows operating system it deals with.

What is the Kernel?

The kernel is the "heart" of the operating system and is one of the first pieces of code to load when the operating system starts. Everything in Windows (and almost any operating system, for that matter) runs on a layer that sits on top of the kernel. This makes the kernel the primary factor in the performance, reliability and security of the entire operating system.

Since all other programs and many portions of the operating system itself depend on the kernel, any problems in the kernel can make those programs crash or behave in unexpected ways. The "Blue Screen of Death" (BSoD) in Windows is the result of an error in the kernel or a kernel mode driver that is so severe that the system can't recover.

What is Kernel Patching?

According to Microsoft's KPP FAQ, kernel patching (also known as kernel "hooking") is

the practice of using internal system calls and other unsupported mechanisms to modify or replace code or critical structures in the kernel of the Microsoft Windows operating system with unknown code or data. "Unknown code or data" is any code or data that is not provided by Microsoft as part of the Windows kernel.

What exactly, does that mean? The most common scenario is for programs to patch the kernel by changing a function pointer in the system service table (SST). The SST is an array of function pointers to in-memory system services. For example, if the function pointer to the NtCreateProcess method is changed, anytime the service dispatch invokes NtCreateProcess, it is actually running the third-party code instead of the kernel code. While the third-party code might be attempting to provide a valid extension to the kernel functionality, it could also be malicious.

Even though almost all of the Windows kernels have allowed kernel patching, it has always been an officially unsupported activity.

Kernel patching breaks the integrity of the Windows kernel and can introduce problems in three critical areas:

  • Reliability
    Since patching replaces kernel code with third-party code, this code can be untested. There is no way for the kernel to assess the quality or intent of this new code. Beyond that, kernel code is very complex, so bugs of any sort can have a significant impact on system stability.
  • Performance
    The overall performance of the operating system is largely determined by the performance of the kernel. Poorly designed third-party code can cause significant performance issues and can make performance unpredictable.
  • Security
    Since patching replaces known kernel code with potentially unknown third-party code, the intent of that third-party code is also unknown. This becomes a potential attack surface for malicious code.

Why Kernel Patch Protection?

As I mentioned earlier, the primary purpose of KPP is to protect the integrity of the kernel and improve the reliability, performance, and security of the Windows operating systems. This is becoming increasingly more important with the prevalence of malicious software that is implementing "root kits". A root kit is a specific type of malicious software (although it is usually included as a part of another, larger, piece of software) that uses a variety of techniques to gain access to a computer. Increasingly, root kits are becoming more sophisticated and are attacking the kernel itself. If the rootkit can gain access to the kernel, it can actually hide itself from the file system and even from any anti-malware tools. Root kits are typically used by malicious software, however, they have also been used by large legitimate businesses, including Sony.

While KPP is a good first step at preventing such attacks, it is not a "magic bullet". It does eliminate one way to attack the system...patching kernel images to manipulate kernel functionality. KPP takes the approach that there is no reliable way for the operating system to distinguish between "known good" and "known bad" components, so it prevents anything from patching the kernel. The only official way to disable KPP is by attaching a kernel debugger to the system.

KPP monitors certain key resources used by the kernel to determine if they have been modified. If the operating system detects that one of these resources has been modified it generates a "bug check", which is essentially a BSoD, and shuts down the system. Currently the following actions trigger this behavior:

  • Modifying system service tables.
  • Modifying the interrupt descriptor table (IDT).
  • Modifying the global descriptor table (GDT).
  • Using kernel stacks that are not allocated by the kernel.
  • Patching any part of the kernel. This is currently detected only on AMD64-based systems.

Why x64?

At this point, you may begin to wonder why Microsoft chose to implement this on x64 based systems only. Microsoft is again responding to customer complaints in this decision. Implementing KPP will almost certainly impact compatibility of much legitimate software, primarily security software such as anti-virus and anti-malware tools, which were built using unsupported kernel patching techniques. This would cause a huge impact on the consumer and also on Microsoft's partners. Since x64-based machines still make up the smaller install base (although they are gaining ground rapidly) and the majority of x64-based software has been rewritten to take advantage of the newer architecture, the impact is considered to be substantially smaller.

So...why the controversy?

Since KPP prevents an application or driver from modifying the kernel, it will, effectively, prevent that application or driver from running. KPP in Vista x64 requires any application drivers be digitally signed, although there are some non-intuitive ways to turn that off. (Turning off signed drivers does prevent certain other aspects of Windows from operating, such as being able to view DRM protected media.) However, all that really means is anyone with a legitimately created company and about $500 per year to spend can get the required digital signature from VeriSign. Unfortunately, even it is a reputable company, it still doesn't provide any guarantees as to the reliability, performance, and security of the kernel.

In order for software (or drivers) to work properly on an operating system that implements KPP, the software must use Microsoft-documented interfaces. If what you are trying to do doesn't have such an interface, then you cannot safely use that functionality. This is what has lead to the controversy. The security vendors are saying that the interfaces they require are not publicly documented by Microsoft (or not yet at any rate) but that Microsoft's own security offerings (Windows OneCare, Windows Defender, and Windows Firewall) are able to work properly and use undocumented interfaces. The security vendors want to "level the playing field".

There are many arguments on both sides of the issue, but it seems that many of them are not thought out completely. Symantec and McAfee have argued that the legitimate security vendors be granted exceptions to KPP using some sort of signing process. (See the TechWeb article.) However, this is fraught with potential problems. As I mentioned earlier, there is currently no reliable way to verify that code is actually from a "known good" source. The closest we can come to that is by digital signing, however, a piece of malicious code can simply include enough pieces from a legitimate "known good" source and hook into the exception.

So lets say, for arguments sake, that Microsoft does relent and is able to come up with some sort of exception mechanism that minimizes (or even removes) the chance of abuse. What happens next? Windows Vista, in particular, already includes an array of new features to provide security vendors ways to work within the KPP guidelines.

The Windows Filtering Platform (WFP) is one such example. WFP enables software to perform network related activities, such as packet inspection and other firewall type activities. In addition to WFP, Vista implements an entirely new TCP stack. This new stack has some fundamentally different behavior than the existing TCP stack on Windows. We also have network cards that implement hardware based stacks to perform what is called "chimney offload", which effectively bypasses large portions of the software based TCP stack. Hooking the network related kernel functions (as a lot of software based firewalls currently do), will miss all of the traffic on a chimney offload based network card. However, hooking in to WFP will catch that traffic.

Should Microsoft stop making technological innovations in the Windows kernel simply because there are a handful of partners and other ISVs that are complaining? The important thing to realize is that KPP is not new in Windows Vista. It has been around since Windows XP 64-bit edition was released. Why is it now that the security vendors are realizing that their products don't work properly on the x64-based operating systems? The main point Microsoft is trying to get across is that most of the functionality required doesn't have to be done in the kernel. Microsoft has been working for the last few years trying to assist their security partners in making their solutions compatible. If there is an interface that isn't documented, or functionality that a vendor believes can only be accomplished by patching the kernel, they can contact their Microsoft representative or email msra@microsoft.com for help finding a documented alternative. According to the KPP FAQ, "if no documented alternative exists...the functionality will not be supported on the relevant Windows operating system version(s) that include patch protection support."

I think the larger controversy is the fact that there are now documented ways to break KPP. This is where Microsoft and its security partners and other security ISVs should be spending their time and energy. If we are going to have a reliable and secure kernel, we need to focus on locking down the kernel so that no one is able to breach it, including the hackers. This is an almost endless process, as the attackers generally have almost infinite amounts of time to bring their "products" to market and don't really have any quality issues to worry about. Even with the recent introduction by Intel and AMD of hardware based virtualization technology (which essentially creates a virtual mini-core processor that can run a specially created operating system), there is still a long way to go.

What's next?

While it is important to understand the goals of KPP and the potential avenues of attack against it, the most important thing for the security community to focus on is in making sure that the Windows kernel stays safe. The best way to do this is to keep shrinking the attack surface until it is almost non-existent. There will always be an attack surface, however, the smaller that surface becomes the easier it is to protect. Imagine guarding a vault. If there is only one way in and out, and that entrance is only 2-feet wide it is much more easily guarded than a vault that has 2 entrances, each of which are 30-feet wide.

However, as malware technology advances it is important for the security technology that tries to protect against it to advance as well. In fact, the security technology really needs to be ahead of the malware if it is to be successful. PatchGuard has already been hacked, some of the proposed Microsoft APIs for KPP won't be available until sometime in 2008, and the security vendors do have legitimate reasons for needing to access certain portions of the kernel.

Host Intrusion Prevention Systems (HIPS), for instance, use kernel access to prevent certain types of attacks, such as buffer overflow attacks or process injection attacks, by watching for system functions being called from memory locations where they shouldn't be called. The Code Red Worm would not have been detected if only file-based protection systems were in use.

The bottom line is that the malware vendors are unpredictable and not bound by any legal, moral, or ethical constraints. They are also not bound by customer reviews, deadlines, and code quality. The security vendors and Microsoft need to work together to ensure that the attack surface for the kernel and Windows itself is small and stays small. They can do this by:

  • Establishing a more reliable way to authenticate security vendors and their products that will prevent "spoofing" by the malware vendors.
  • Minimizing the attack surface of the Windows Kernel.
  • Establishing documented APIs to interact with the kernel to perform security related functions, such as firewall activities.
  • Enforcing driver signatures...in other words, don't allow this mechanism to be turned off. At least don't allow it to be turned off for certain security critical drivers.
  • Enforcing security software digital signatures. If you want your security tool to run, it must be signed. Again, don't allow this mechanism to be turned off.
  • Establishing a more secure way for the security products to hook in to the kernel.
  • Restricting products to patching only specific areas of the kernel. Currently, it is possible to patch almost any portion of the kernel.
  • Enforcing Windows certification testing for any security products.

          [Advocacy] Re: Apple A12 CPU - Razbijac      Cache   Translate Page      
Quote: "[url=/p3857814]Ivan Dimkovic[/url]:  Practically, what remains is to: 1. Port the assembler routines (kernels for processing images / video / music etc.) 2. Drop the Intel-specific intrinsics (although Apple can very much help here by adding its own / 'emulating' them via compiler macros and the like) 3. Eliminate problems where some parts of the code rely on Intel behavior (say, around atomicity of operations / memory access), although here too Apple can help by mimicking Intel beha...
          It's October 2018, and Exchange can be pwned by an 8 year-old... bug      Cache   Translate Page      

Microsoft has released the October edition of its monthly security update, addressing a total of 49 CVE-listed bugs.

DLL bug a blast from the past

Among the 49 fixes were three issues that have already been publicly disclosed and a fourth that was being targeted in the wild. On top of that, a remote code execution bug in Exchange Server is the resurfacing of a vulnerability first found in 2010.

CVE-2010-3190 is a remote code execution bug created by insecure handling of DLL files in applications made with Microsoft Foundation Classes. The issue was covered extensively by Microsoft back in 2010, but because these sorts of flaws are notoriously difficult to root out, the issue was only recently found in Exchange Server 2010 SP3, 2013, and 2016.

Kaspersky Labs took credit for discovering and reporting the active attacks on CVE-2018-8453 . This elevation of privilege flaw in the way Win32K handles drivers allows attackers to run their code with kernel mode access, granting the ability to do things like create new accounts and full ability to write or delete data.

Also publicly reported, but not exploited, were CVE-2018-8423, a remote code execution bug in the JET Database Engine for Windows; CVE-2018-8497, a Windows Kernel elevation of privilege vulnerability; and CVE-2018-8531, a remote code execution flaw in the Azure IoT device client that would be exploited via a malicious email or message attachment.

Dustin Childs, researcher with Trend Micro's Zero Day Initiative, singled out CVE-2018-8492 , a security bypass flaw in Device Guard, as a particularly dangerous issue that admins should pay special attention to.

"This patch corrects a vulnerability that could allow an attacker to inject malicious code into a Windows PowerShell session," Childs explained .

"This may not seem too bad on the surface, but it’s just the type of thing used by fileless malware."

Microsoft is also warning that two remote code execution flaws in Hyper-V, CVE-2018-8489 and CVE-2018-8492 can be exploited by guest VMs to execute code on the host machine, and should also be a priority for admins.

As is often the case, Microsoft's Edge and Internet Explorer browsers, along with the Chakra Scripting Engine for Edge, were the subject of a number of critical remote code execution bugs that would be targeted via malicious websites.

For Office, Microsoft posted patches for security feature bypass flaws in PowerPoint ( CVE-2018-8501 ), Excel ( CVE-2018-8502 ), and Word ( CVE-2018-8504 ).

Adobe delivers second patches of the month

Hot on the heels of last week's giant Acrobat and Reader security update, Adobe has posted fixes for vulnerabilities in four of its products.

For Digital Editions , the update will patch nine CVE-listed vulnerabilities that could allow remote code execution. The Adobe Experience Manager update addresses five cross-site scripting vulnerabilities, while an update for Framemaker includes fixes for a single privilege escalation flaw.

Finally, a fix for the Adobe Technical Communications Suite addresses a single privilege escalation flaw from insecure library handling.

Don't forget Android

While you're out there installing patches, it's worth noting that last week Google also posted its October security bulletin for Android with new fixes, including a number of remote code execution bugs in the Media framework and System components.

Google will get the update out to its branded devices, while other Android devices will need to be updated through their respective vendors.



          Debian GNU/Linux 9 "Stretch" Gets New Kernel Patch to Fix Two Security Flaws ...      Cache   Translate Page      

The Debian Project published a new Linux security advisory to inform users of the Debian GNU/Linux 9 "Stretch" operating system series about a new kernel security patch that fixes two vulnerabilities.

Coming just a week after the latest major kernel security update for Debian GNU/Linux 9 "Stretch," the new Linux kernel security patch is here to address a flaw ( CVE-2018-15471 ) discovered by Google Project Zero's Felix Wilhelm in the hash handling of the Linux kernel's xen-netback module, which could result in information leaks, privilege escalation, as well as denial of service.

"Felix Wilhelm of Google Project Zero discovered a flaw in the hash handling of the xen-netback Linux kernel module. A malicious or buggy frontend may cause the (usually privileged) backend to make out of bounds memory accesses, potentially resulting in privilege escalation, denial of service, or information leaks," reads the security advisory published by Salvatore Bonaccorso.

All Debian Stretch users are urged to update their systems

The new kernel security patch also addresses a privilege escalation flaw ( CVE-2018-18021 ) discovered in the Linux kernel's Kernel-based Virtual Machine (KVM) subsystem on AArch64 (ARM64) architectures, which could let an attacker cause a denial of service (hypervisor panic) or redirect the hypervisor flow of control with complete register control.

To fix these two security vulnerabilities, the Debian Project recommends all users of the Debian GNU/Linux 9 "Stretch" operating system series to update the kernel packages to version 4.9.110-3+deb9u6, which is now available for download from the main archives. To update your systems, run the " sudo apt-get update && sudo apt-get full-upgrade " command in a terminal emulator. The new kernel version will replace last week's 4.9.110-3+deb9u5 kernel, which fixed no less than 18 vulnerabilities.
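As a quick sanity check after the upgrade (a generic sketch, not part of the advisory itself), you can confirm which kernel package version is installed and, after a reboot, which kernel is actually running:

sudo apt-get update && sudo apt-get full-upgrade      # pulls in the 4.9.110-3+deb9u6 packages
dpkg -l 'linux-image*' | awk '/^ii/ {print $2, $3}'   # installed kernel packages and versions
sudo reboot                                           # the patched kernel is only used after a reboot
uname -v                                              # shows the Debian version string of the running kernel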


          If You're Typing the Word MCRYPT Into Your PHP Code, You're Doing It Wrong      Cache   Translate Page      

Foreword: You probably should not be deploying your own cryptography to begin with, especially if you don't already understand that encryption is not authentication . For production systems, use PECL libsodium or defuse/php-encryption and save yourself the headache.

The rest of this post is intended for PHP developers who still want to write their own cryptography code, or already have.

Top 3 Reasons to Avoid Mcrypt

I. Mcrypt is Abandonware

PHP's optional mcrypt extension provides bindings for a cryptography library called libmcrypt, which has been collecting dust since 2007 (eight years and counting) despite plenty of bugs, some of which even have patches available.

If bit rot weren't enough reason to avoid using this library, the major design flaws which make it easier to write insecure code than it is to write secure code should.

II. It's Confusing and Counter-Intuitive

Look at this list of mcrypt ciphers and tell me how you would implement AES-256-CBC . If your code looks like this, you've just run headfirst into the first (and arguably most common) mcrypt design wart:

function encryptOnly($plaintext, $key)
{
    $iv = mcrypt_create_iv(16, MCRYPT_DEV_URANDOM);
    $ciphertext = mcrypt_encrypt(MCRYPT_RIJNDAEL_256, $key, $plaintext, MCRYPT_MODE_CBC, $iv);
    return $iv.$ciphertext;
}

Surprise! MCRYPT_RIJNDAEL_256 doesn't mean AES-256 .

All variants of AES use a 128-bit block size with varying key lengths (128, 192, or 256). This means that MCRYPT_RIJNDAEL_128 is the only correct choice if you want AES.

MCRYPT_RIJNDAEL_192 and MCRYPT_RIJNDAEL_256 instead refer to non-standard, less-studied variants of the Rijndael block cipher that operate on larger blocks.

Considering that AES-256 has much worse key scheduling than AES-128 , it's not at all unreasonable to suspect there might be unknown weaknesses in the non-standard Rijndael variants that are not present in the standardized 128-bit block size version of the algorithm. At the very least, it makes interoperability with other encryption libraries that only implement AES a challenge.

Isn't it great that mcrypt makes you feel dumb for not knowing details that you probably shouldn't really need to know? Don't worry, it gets worse.

III. Null Padding

We already stated that not authenticating your ciphertexts is a bad idea, and in all fairness, padding oracle attacks are going to be a problem in CBC (Cipher Block Chaining) mode no matter what padding scheme you select if you fail to Encrypt then MAC .

If you encrypt your message with mcrypt_encrypt() , you have to choose between writing your own plaintext padding strategy or using the one mcrypt implements by default: zero-padding.

To see why zero-padding sucks, let's encrypt then decrypt a binary string in AES-128-CBC using mcrypt (The result of running this code is available here ):

$key = hex2bin('000102030405060708090a0b0c0d0e0f');
$message = hex2bin('5061726101676f6e000300');
$iv = mcrypt_create_iv(16, MCRYPT_DEV_URANDOM);

$encrypted = mcrypt_encrypt(MCRYPT_RIJNDAEL_128, $key, $message, MCRYPT_MODE_CBC, $iv);
$decrypted = mcrypt_decrypt(MCRYPT_RIJNDAEL_128, $key, $encrypted, MCRYPT_MODE_CBC, $iv);

// This should still be padded:
var_dump(bin2hex($decrypted));

// Let's strip off the padding:
$stripped = rtrim($decrypted, "\0");
var_dump(bin2hex($stripped));

// Does this equal the original message?
var_dump($stripped === $message);

As you can see, padding a plaintext with zero bytes can lead to a loss of data. A much safer alternative is to use PKCS7 padding.

OpenSSL Does It Better

Here is an example of an unauthenticated AES-256-CBC encryption library written in Mcrypt with PKCS7 padding.

/**
 * This library is unsafe because it does not MAC after encrypting
 */
class UnsafeMcryptAES
{
    const CIPHER = MCRYPT_RIJNDAEL_128;

    public static function encrypt($message, $key)
    {
        if (mb_strlen($key, '8bit') !== 32) {
            throw new Exception("Needs a 256-bit key!");
        }
        $ivsize = mcrypt_get_iv_size(self::CIPHER, MCRYPT_MODE_CBC);
        $iv = mcrypt_create_iv($ivsize, MCRYPT_DEV_URANDOM);

        // Add PKCS7 padding
        $block = mcrypt_get_block_size(self::CIPHER, MCRYPT_MODE_CBC);
        $pad = $block - (mb_strlen($message, '8bit') % $block);
        $message .= str_repeat(chr($pad), $pad);

        $ciphertext = mcrypt_encrypt(
            MCRYPT_RIJNDAEL_128,
            $key,
            $message,
            MCRYPT_MODE_CBC,
            $iv
        );

        return $iv . $ciphertext;
    }

    public static function decrypt($message, $key)
    {
        if (mb_strlen($key, '8bit') !== 32) {
            throw new Exception("Needs a 256-bit key!");
        }
        $ivsize = mcrypt_get_iv_size(self::CIPHER, MCRYPT_MODE_CBC);
        $iv = mb_substr($message, 0, $ivsize, '8bit');
        $ciphertext = mb_substr($message, $ivsize, null, '8bit');

        $plaintext = mcrypt_decrypt(
            MCRYPT_RIJNDAEL_128,
            $key,
            $ciphertext,
            MCRYPT_MODE_CBC,
            $iv
        );

        // Strip PKCS7 padding
        $block = mcrypt_get_block_size(self::CIPHER, MCRYPT_MODE_CBC);
        $len = mb_strlen($plaintext, '8bit');
        $pad = ord($plaintext[$len - 1]);
        if ($pad <= 0 || $pad > $block) {
            // Padding error!
            return false;
        }
        return mb_substr($plaintext, 0, $len - $pad, '8bit');
    }
}

And here's the library written using OpenSSL.

/**
 * This library is unsafe because it does not MAC after encrypting
 */
class UnsafeOpensslAES
{
    const METHOD = 'aes-256-cbc';

    public static function encrypt($message, $key)
    {
        if (mb_strlen($key, '8bit') !== 32) {
            throw new Exception("Needs a 256-bit key!");
        }
        $ivsize = openssl_cipher_iv_length(self::METHOD);
        $iv = openssl_random_pseudo_bytes($ivsize);

        $ciphertext = openssl_encrypt(
            $message,
            self::METHOD,
            $key,
            OPENSSL_RAW_DATA,
            $iv
        );

        return $iv . $ciphertext;
    }

    public static function decrypt($message, $key)
    {
        if (mb_strlen($key, '8bit') !== 32) {
            throw new Exception("Needs a 256-bit key!");
        }
        $ivsize = openssl_cipher_iv_length(self::METHOD);
        $iv = mb_substr($message, 0, $ivsize, '8bit');
        $ciphertext = mb_substr($message, $ivsize, null, '8bit');

        return openssl_decrypt(
            $ciphertext,
            self::METHOD,
            $key,
            OPENSSL_RAW_DATA,
            $iv
        );
    }
}

In almost every metric, openssl wins over mcrypt:

  • Specifying 'aes-256-cbc' is much more obvious than remembering to use MCRYPT_RIJNDAEL_128 with a 32-byte binary key.
  • openssl_encrypt() performs PKCS7 padding by default, and lets you specify OPENSSL_ZERO_PADDING if you really want it.
  • The code you write ends up much more compact and readable, with less room for implementation errors.
  • It performs AES encryption/decryption much faster, since it supports AES-NI if your processor has this feature. AES-NI also means you don't have to worry about an attacker recovering your secret key from cache-timing information.
  • OpenSSL is being actively developed and maintained. In response to the Heartbleed vulnerability last year, several organizations (including the Linux Foundation) declared the project critical Internet infrastructure and began pouring resources into finding and fixing bugs in the system. If you still don't trust it, there's always LibreSSL.

Simplicity, security, and performance. What more is there to ask for?

There are, however, two things with OpenSSL that you should watch out for.

OpenSSL Gotchas

  • The CSPRNG they offer is a userspace PRNG based on hash functions, which goes against the advice of Thomas Ptacek to use /dev/urandom. The only one-liner alternative is mcrypt_create_iv(), as demonstrated above, but this function is only exposed if you enable the mcrypt extension. Fortunately, PHP 7 will offer a core random_bytes() function that leverages the kernel's CSPRNG.
  • Although your version of OpenSSL might list GCM based cipher modes (e.g. aes-128-gcm), PHP doesn't actually support these methods yet.

In Sum

Don't use mcrypt . If you're typing the word mcrypt into your code, you're probably making a mistake. Although it's possible to provide a relatively secure cryptography library that builds on top of mcrypt (the earlier version of defuse/php-encryption did), switching your code to openssl will provide better security, performance, maintainability, and portability.

Even better: use libsodium instead.


          Installing Ubuntu on Windows 10 is now easier, here is how      Cache   Translate Page      

Ubuntu is an excellent distribution based on the Linux kernel; using this system is relatively easy, and its graphical interface is very well designed, which is why these kinds of distributions are becoming more and more popular. If you are interested in installing or simply trying out a new OS like this, stay with us; in this article […]

The post Installing Ubuntu on Windows 10 is now easier, here is how appeared first on Soluciones WinDroid.


          Vorke Z6 Plus firmware problem      Cache   Translate Page      
I bought this device from you (geekbuying.com) and it came with this software (q201- userdebug 7.1.2 NHG47L 20180606 test kernel version 3.14.29 server @ ubuntu # 4 Wed Jun 6). I accidentally flashed it with the firmware (q201- userdebug 7.1.2 NHG47L 20180126 test kernel version 3.14.29 server @ ubuntu # 3 Fri Jan 26), which means I downgraded it. Where can I download the newer version (20180606) that was originally on it? I have searched everywhere and cannot find it; every source gives me (20180126) as the latest firmware.
          Microsoft Joins the Open Invention Network, NVIDIA Announces RAPIDS, Asterisk 16.0.0 Now Available, BlockScout Released and Security Advisory for Debian GNU/Linux 9 "Stretch"      Cache   Translate Page      

News briefs for October 10, 2018.

Microsoft has joined the Open Invention Network (OIN), an open-source patent consortium. According to ZDNet, this means "Microsoft has essentially agreed to grant a royalty-free and unrestricted license to its entire patent portfolio to all other OIN members." OIN's CEO Keith Bergelt says "This is everything Microsoft has, and it covers everything related to older open-source technologies such as Android, the Linux kernel, and OpenStack; newer technologies such as LF Energy and HyperLedger, and their predecessor and successor versions."

NVIDIA has just announced RAPIDS, its open-source data analytics/machine learning platform, Phoronix reports. The project is "intended as an end-to-end solution for data science training pipelines on graphics processors", and NVIDIA claims that "RAPIDS can allow for machine learning training at up to 50x and is built atop CUDA for GPU acceleration".

The Asterisk Development Team announces that Asterisk 16.0.0 is now available. This version includes many security fixes, new features and tons of bug fixes. You can download it from here.

BlockScout, the first full-featured open-source Ethereum block explorer tool, was released yesterday by POA Network. The secure and easy-to-use tool "lets users search and explore transactions, addresses, and balances on the Ethereum, Ethereum Classic, and POA Network blockchains". And, because it's open source, anyone can "contribute to its development and customize the tool to suit their own needs".

Debian has published another security advisory for Debian GNU/Linux 9 "Stretch". According to Softpedia News, CVE-2018-15471 was "discovered by Google Project Zero's Felix Wilhelm in the hash handling of Linux kernel's xen-netback module, which could result in information leaks, privilege escalation, as well as denial of service". The patch also addresses CVE-2018-18021, a privilege escalation flaw. The Debian Project recommends that all users of GNU/Linux 9 "Stretch" update kernel packages to version 4.9.110-3+deb9u6.


          Help Wanted - Kernels - Markham, ON      Cache   Translate Page      
Now hiring at 5000 Highway 7,...
From Job Spotter - Sat, 01 Sep 2018 18:21:45 GMT - View all Markham, ON jobs
          PostgreSQL Database Administrator - Upgrade - Montreal, WI      Cache   Translate Page      
Solid Linux fundamentals including kernel and OS tuning, as they relate to DB performance and security. Upgrade is a consumer credit platform that is changing...
From Upgrade - Wed, 22 Aug 2018 22:02:31 GMT - View all Montreal, WI jobs
          Linux Kernel Patches Posted For Streebog - Crypto From Russia's FSB      Cache   Translate Page      
Just months after the controversial Speck crypto code, which raised various concerns due to its development by the NSA and potential backdoors, was added to the Linux kernel and then removed from the kernel tree, there is now Russia's Streebog that could be mainlined...
          The Linux Kernel In 2018 Finally Deems USB 3.0 Ubiquitous Rather Than An Oddity      Cache   Translate Page      
The latest news in the "it's about darn time" section is that the Linux kernel's default i386/x86_64 kernel configurations will finally ship with USB 3.0 support enabled, a.k.a. CONFIG_USB_XHCI_HCD...
          Embedded Software Developer - Le Groupe TGC - Montréal, QC      Cache   Translate Page      
Significant experience with real-time operating systems, for example: kernel development contribution. Ability to communicate effectively in English and...
From Le Groupe TGC - Wed, 20 Jun 2018 08:42:13 GMT - View all Montréal, QC jobs
          Microsoft contributes 60,000 patents to an open-source consortium      Cache   Translate Page      

Microsoft has joined the open-source consortium Open Invention Network. The software giant is making 60,000 of its own patents available to the network's members.

Founded in 2005, the Open Invention Network (OIN), whose members include corporations such as Google, IBM, Red Hat and Suse, is dedicated to protecting Linux and other open-source software from patent lawsuits. Now the software giant Microsoft has also joined the consortium and is contributing more than 60,000 of its own patents to the Linux alliance, as Microsoft manager Erich Andersen explains in a blog post.

Until now, the roughly 2,650 members of the OIN community had accumulated a good 1,300 patents and applications, writes ZDNet.com. The tens of thousands of Microsoft patents that are now available for use by OIN members cover older open-source technologies such as Android, the Linux kernel and OpenStack, as well as newer technologies such as LF Energy and Hyperledger. What does this mean for Microsoft? With Android patents alone, the company is said to have earned around 3.4 billion US dollars by 2014.

Microsoft buries its feud with the open-source community

The move therefore comes as quite a surprise, as Andersen himself acknowledges. It is no secret, he says, that there has been friction between Microsoft and the open-source community in the past when it came to releasing software patents. According to senior Microsoft executive Scott Guthrie, there has been a fundamental shift in philosophy at Microsoft. By opening its patent portfolio to the OIN, Microsoft wants to do its part to protect open-source projects from legal trouble, Guthrie told ZDNet.

The software company had previously thrown 10,000 patents into the ring to protect users of its Azure cloud platform from possible litigation. In addition, just last week Microsoft joined the anti-patent-troll initiative LOT Network, whose 300 members, including Amazon, Canon, Cisco, Lenovo, Tesla and SAP, together hold around 1.35 million patents. According to the LOT Network, more than 10,000 companies have already been sued at least once by a patent troll; defending such a suit costs an average of 3.2 million dollars.

More on this topic:


          Control Flow Integrity in the Android kernel      Cache   Translate Page      

Posted by Sami Tolvanen, Staff Software Engineer, Android Security

Android's security model is enforced by the Linux kernel, which makes it a tempting target for attackers. We have put a lot of effort into hardening the kernel in previous Android releases and in Android 9, we continued this work by focusing on compiler-based security mitigations against code reuse attacks.

Google's Pixel 3 will be the first Android device to ship with LLVM's forward-edge Control Flow Integrity (CFI) enforcement in the kernel, and we have made CFI support available in Android kernel versions 4.9 and 4.14. This post describes how kernel CFI works and provides solutions to the most common issues developers might run into when enabling the feature.

Protecting against code reuse attacks

A common method of exploiting the kernel is using a bug to overwrite a function pointer stored in memory, such as a stored callback pointer or a return address that had been pushed to the stack. This allows an attacker to execute arbitrary parts of the kernel code to complete their exploit, even if they cannot inject executable code of their own. This method of gaining code execution is particularly popular with the kernel because of the huge number of function pointers it uses, and the existing memory protections that make code injection more challenging.

CFI attempts to mitigate these attacks by adding additional checks to confirm that the kernel's control flow stays within a precomputed graph. This doesn't prevent an attacker from changing a function pointer if a bug provides write access to one, but it significantly restricts the valid call targets, which makes exploiting such a bug more difficult in practice.

Figure 1. In an Android device kernel, LLVM's CFI limits 55% of indirect calls to at most 5 possible targets and 80% to at most 20 targets.

Gaining full program visibility with Link Time Optimization (LTO)

In order to determine all valid call targets for each indirect branch, the compiler needs to see all of the kernel code at once. Traditionally, compilers work on a single compilation unit (source file) at a time and leave merging the object files to the linker. LLVM's solution to CFI is to require the use of LTO, where the compiler produces LLVM-specific bitcode for all C compilation units, and an LTO-aware linker uses the LLVM back-end to combine the bitcode and compile it into native code.

Figure 2. A simplified overview of how LTO works in the kernel. All LLVM bitcode is combined, optimized, and generated into native code at link time.

Linux has used the GNU toolchain for assembling, compiling, and linking the kernel for decades. While we continue to use the GNU assembler for stand-alone assembly code, LTO requires us to switch to LLVM's integrated assembler for inline assembly, and either GNU gold or LLVM's own lld as the linker. Switching to a relatively untested toolchain on a huge software project will lead to compatibility issues, which we have addressed in our arm64 LTO patch sets for kernel versions 4.9 and 4.14.

In addition to making CFI possible, LTO also produces faster code due to global optimizations. However, additional optimizations often result in a larger binary size, which may be undesirable on devices with very limited resources. Disabling LTO-specific optimizations, such as global inlining and loop unrolling, can reduce binary size by sacrificing some of the performance gains. When using GNU gold, the aforementioned optimizations can be disabled with the following additions to LDFLAGS:

LDFLAGS += -plugin-opt=-inline-threshold=0 \
           -plugin-opt=-unroll-threshold=0

Note that flags to disable individual optimizations are not part of the stable LLVM interface and may change in future compiler versions.

Implementing CFI in the Linux kernel

LLVM's CFI implementation adds a check before each indirect branch to confirm that the target address points to a valid function with a correct signature. This prevents an indirect branch from jumping to an arbitrary code location and even limits the functions that can be called. As C compilers do not enforce similar restrictions on indirect branches, there were several CFI violations due to function type declaration mismatches even in the core kernel that we have addressed in our CFI patch sets for kernels 4.9 and 4.14.
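To make that kind of mismatch concrete, here is an illustrative fragment (not taken from the kernel sources; the names are invented, and the same class of benign violation can be reproduced in user space by compiling with clang -flto -fvisibility=hidden -fsanitize=cfi-icall):

#include <stdio.h>

typedef void (*handler_t)(unsigned long val);

static void my_handler(int val)          /* declared with int, not unsigned long */
{
        printf("handled %d\n", val);
}

int main(void)
{
        /*
         * The cast silences the compiler warning, but the indirect call
         * through fn fails the CFI check because the target's type does not
         * match the type of the call site's function pointer. Changing
         * my_handler() to take an unsigned long removes the violation.
         */
        handler_t fn = (handler_t)my_handler;
        fn(42);
        return 0;
}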

Kernel modules add another complication to CFI, as they are loaded at runtime and can be compiled independently from the rest of the kernel. In order to support loadable modules, we have implemented LLVM's cross-DSO CFI support in the kernel, including a CFI shadow that speeds up cross-module look-ups. When compiled with cross-DSO support, each kernel module contains information about valid local branch targets, and the kernel looks up information from the correct module based on the target address and the modules' memory layout.

Figure 3. An example of a cross-DSO CFI check injected into an arm64 kernel. Type information is passed in X0 and the target address to validate in X1.

CFI checks naturally add some overhead to indirect branches, but due to more aggressive optimizations, our tests show that the impact is minimal, and overall system performance even improved 1-2% in many cases.

Enabling kernel CFI for an Android device

CFI for arm64 requires clang version >= 5.0 and binutils >= 2.27. The kernel build system also assumes that the LLVMgold.so plug-in is available in LD_LIBRARY_PATH. Pre-built toolchain binaries for clang and binutils are available in AOSP, but upstream binaries can also be used.

The following kernel configuration options are needed to enable kernel CFI:

CONFIG_LTO_CLANG=y
CONFIG_CFI_CLANG=y

Using CONFIG_CFI_PERMISSIVE=y may also prove helpful when debugging a CFI violation or during device bring-up. This option turns a violation into a warning instead of a kernel panic.

As mentioned in the previous section, the most common issue we ran into when enabling CFI on Pixel 3 was benign violations caused by function pointer type mismatches. When the kernel runs into such a violation, it prints out a runtime warning that contains the call stack at the time of the failure, and the call target that failed the CFI check. Changing the code to use a correct function pointer type fixes the issue. While we have fixed all known indirect branch type mismatches in the Android kernel, similar problems may still be found in device-specific drivers, for example.

CFI failure (target: [<fffffff3e83d4d80>] my_target_function+0x0/0xd80):
------------[ cut here ]------------
kernel BUG at kernel/cfi.c:32!
Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
…
Call trace:
…
[<ffffff8752d00084>] handle_cfi_failure+0x20/0x28
[<ffffff8752d00268>] my_buggy_function+0x0/0x10
…

Figure 4. An example of a kernel panic caused by a CFI failure.

Another potential pitfall is address space conflicts, but these should be less common in driver code. LLVM's CFI checks only understand kernel virtual addresses and any code that runs at another exception level or makes an indirect call to a physical address will result in a CFI violation. These types of failures can be addressed by disabling CFI for a single function using the __nocfi attribute, or even disabling CFI for entire code files using the $(DISABLE_CFI) compiler flag in the Makefile.

static int __nocfi address_space_conflict()
{
      void (*fn)(void);
 …
/* branching to a physical address trips CFI w/o __nocfi */
 fn = (void *)__pa_symbol(function_name);
      cpu_install_idmap();
      fn();
      cpu_uninstall_idmap();
 …
}

Figure 5. An example of fixing a CFI failure caused by an address space conflict.
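For the whole-file case mentioned above, the Makefile route might look something like the following sketch (the object file name is hypothetical, and it assumes that DISABLE_CFI expands to the compiler flags that turn CFI off, as in the Android common kernel build files):

# Hypothetical kbuild fragment: opt one object file out of CFI instrumentation
CFLAGS_suspect_driver.o += $(DISABLE_CFI)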

Finally, like many hardening features, CFI can also be tripped by memory corruption errors that might otherwise result in random kernel crashes at a later time. These may be more difficult to debug, but memory debugging tools such as KASAN can help here.

Conclusion

We have implemented support for LLVM's CFI in Android kernels 4.9 and 4.14. Google's Pixel 3 will be the first Android device to ship with these protections, and we have made the feature available to all device vendors through the Android common kernel. If you are shipping a new arm64 device running Android 9, we strongly recommend enabling kernel CFI to help protect against kernel vulnerabilities.

LLVM's CFI protects indirect branches against attackers who manage to gain access to a function pointer stored in kernel memory. This makes a common method of exploiting the kernel more difficult. Our future work involves also protecting function return addresses from similar attacks using LLVM's Shadow Call Stack, which will be available in an upcoming compiler release.


          Solution to: kernel panic – not syncing: VFS: Unable to mount root fs unknown-block (0,0)      Cache   Translate Page      

Solution to: kernel panic – not syncing: VFS: Unable to mount root fs unknown-block (0,0). After updating to a new Linux kernel and then rebooting the system, one of the most common and most feared errors is a hang during the boot process. The following message usually appears: kernel panic – not syncing: VFS: Unable to mount root fs unknown-block (0,0). This error occurs because the modules that should be configured to auto-load have not been loaded. Don't worry, it has a solution. In this article I propose two solutions: The first solution is recommended for

The article Solution to: kernel panic – not syncing: VFS: Unable to mount root fs unknown-block (0,0) was published on Linux para todos


          Re: [PATCH AUTOSEL 4.18 24/58] Input: atakbd - fix Atari CapsLock ...      Cache   Translate Page      
Dmitry Torokhov writes: (Summary) On Wed, Oct 10, 2018 at 10:29:58AM -0400, Sasha Levin wrote: you keep it in the upstream kernel to begin with?
Because obviously there are users. Yes, the box may OOPS if someone manually unbinds the device through sysfs, but the solution is not to patch stable kernels, but simply tell the user "don't do that [yet]". When selecting a patch for stable ask yourself: "if I do not pick this for stable will a distribution be willing to patch this into their kernel on their own"? If the answer is "no" it should be a pretty strong indicator whether a patch belongs to stable or not.
          Re: [PATCH v5 1/2] memory_hotplug: Free pages as higher order      Cache   Translate Page      
Arun KS writes: (Summary) And use adjust_managed_page_count() instead of page_zone(page)->managed_pages += nr_pages;
https://lore.kernel.org/patchwork/patch/989445/
-static void generic_online_page(struct page *page) +static int generic_online_page(struct page *page, unsigned int order) { - __online_page_set_limits(page); + + for (loop = 0 ; loop++, p++) { + __ClearPageReserved(p); + } + + adjust_managed_page_count(page, nr_pages);
          Re: [RFC PATCH 0/7] Introduce thermal pressure      Cache   Translate Page      
Daniel Lezcano writes: On 10/10/2018 17:35, Lukasz Luba wrote:
Is it single threaded compute-intensive?
aobench AFAICT
It would be interesting if you can share the thermal profile of your board.
create mode 100644 kernel/sched/thermal.h

          Bug introduced in the of_get_named_gpiod_flags function.      Cache   Translate Page      
Wojciech_Zabołotny writes: (Summary) Hi,
The function of_get_named_gpiod_flags in older versions of the kernel (up to 4.7.10 - https://elixir.bootlin.com/linux/v4.7.10/source/drivers/gpio/gpiolib-of.c#L75 ) contained an important workaround:
/* .of_xlate might decide to not fill in the flags, so clear it. the Xilinx AXI GPIO driver: https://github.com/Xilinx/linux-xlnx/blob/c2ba891326bb472da59b6a2da29aca218d337687/drivers/gpio/gpio-xilinx.c#L262 ) the random, unitialized value from the stack in of_find_gpio ( https://elixir.bootlin.com/linux/v4.18.13/source/drivers/gpio/gpiolib-of.c#L228 ) is used, which results in random settings of e.g., OPEN DRAIN or OPEN SOURCE mode.
          Bug introduced in the of_get_named_gpiod_flags function.      Cache   Translate Page      
wzab writes: (Summary) Hi,
The function of_get_named_gpiod_flags in older versions of the kernel (up to 4.7.10 - https://elixir.bootlin.com/linux/v4.7.10/source/drivers/gpio/gpiolib-of.c#L75 ) contained an important workaround:
/* .of_xlate might decide to not fill in the flags, so clear it. the Xilinx AXI GPIO driver: https://github.com/Xilinx/linux-xlnx/blob/c2ba891326bb472da59b6a2da29aca218d337687/drivers/gpio/gpio-xilinx.c#L262 ) the random, unitialized value from the stack in of_find_gpio ( https://elixir.bootlin.com/linux/v4.18.13/source/drivers/gpio/gpiolib-of.c#L228 ) is used, which results in random settings of e.g., OPEN DRAIN or OPEN SOURCE mode.
          Re: BUG: corrupted list in p9_read_work      Cache   Translate Page      
syzbot writes: (Summary) Hello,
syzbot has tested the proposed patch and the reproducer did not trigger crash:
Reported-and-tested-by:
syzbot+2222c34dc40b515f30dc@syzkaller.appspotmail.com Tested on: commit: e4ca13f7d075 9p/trans_fd: abort p9_read_work if req status.. git tree: git://github.com/martinetd/linux.git for-syzbot kernel config: https://syzkaller.appspot.com/x/.config?x=fada1c387645ed03 compiler: gcc (GCC) 8.0.1 20180413 (experimental) Note: testing is done by a robot and is best-effort only.
          Re: [Ksummit-discuss] [PATCH 0/2] code of conduct fixes      Cache   Translate Page      
Randy Dunlap writes: On 10/10/18 9:12 AM, Pavel Machek wrote:
These are exactly my thoughts.
Exactly. We have a process and the 4.19-rc4 CoC patch did not follow it. (I probably won't make it to kernel summit this year.) Ditto.

          [PATCH v3 0/5] Clean up huge vmap and ioremap code      Cache   Translate Page      
Will Deacon writes: (Summary) Hi all,
This is version three of the patches I previously posted here:
v1: http://lkml.kernel.org/r/1536747974-25875-1-git-send-email-will.deacon@arm.com
v2: http://lkml.kernel.org/r/1538478363-16255-1-git-send-email-will.deacon@arm.com
The only changes since v2 are to the commit messages.
All feedback welcome,
Will
--->8 Will Deacon (5): ioremap: Rework pXd_free_pYd_page() API arm64: mmu: Drop pXd_present() checks from pXd_free_pYd_table() x86/pgtable: Drop pXd_none() checks from pXd_free_pYd_table() lib/
          Re: [Ksummit-discuss] [PATCH 0/2] code of conduct fixes      Cache   Translate Page      
Pavel Machek writes: (Summary) Hi!
few weeks.
These are exactly my thoughts.
better than adding a disclaimer to the new one.
Reverting it then having proper discussion sounds suitable to me. (And it would be nice to have something on the mailing lists, too, as I probably won't make it to kernel summit this year.)
Pavel
          Re: kernel BUG at arch/x86/mm/physaddr.c:LINE!      Cache   Translate Page      
Amir Goldstein writes: (Summary) Thanks,
Amir.
From 49c4c21b37ccbdc39680b0dc0f1095c1755f5b9a Mon Sep 17 00:00:00 2001 From: Amir Goldstein <amir73il@gmail.com>
Date: Wed, 10 Oct 2018 18:57:50 +0300
Subject: [PATCH] ovl: fix error handling in ovl_verify_set_fh()
We hit a BUG on kfree of an ERR_PTR()...
Reported-by: syzbot+ff03fe05c717b82502d0@syzkaller.appspotmail.com Fixes: 8b88a2e64036 ("ovl: verify upper root dir matches lower root dir") Cc: <stable@vger.kernel.org>
          Re: BUG: corrupted list in p9_read_work      Cache   Translate Page      
syzbot writes: (Summary) Hello,
syzbot tried to test the proposed patch but build/boot failed:
failed to checkout kernel repo git://github.com/martinetd/linux.git on commit e4ca13f7d075e551dc158df6af18fb412a1dba0a: failed to run ["git" "checkout" "e4ca13f7d075e551dc158df6af18fb412a1dba0a"]: exit status 128
fatal: reference is not a tree: e4ca13f7d075e551dc158df6af18fb412a1dba0a
Tested on:
commit: [unknown]
git tree: git://github.com/martinetd/linux.git e4ca13f7d075e551dc158df6af18fb412a1dba0a
compiler: gcc (GCC) 8.0.1 20180413 (experimental)
          Re: [RFC PATCH v4 3/9] x86/cet/ibt: Add IBT legacy code bitmap all ...      Cache   Translate Page      
Yu-cheng Yu writes: On Fri, 2018-10-05 at 10:26 -0700, Eugene Syromiatnikov wrote: That's likely the only way to go.
This bitmap is needed only when the app does dlopen() a non-IBT .so file. Most applications do not need it. Can't we let dlopen mmap() the bitmap when needed and pass it to the kernel?
Yu-cheng

          Re: BUG: corrupted list in p9_read_work      Cache   Translate Page      
Dominique Martinet writes: (Summary) ibv_devinfo should list an interface if you have the userspace library that should have come with rxe_cfg.
(specifically, my VM uses /etc/libibverbs.d/rxe.driver to point to the lib, and /usr/lib64/libibverbs/librxe-rdmav16.so the lib itself) lib, and /usr/lib64/libibverbs/librxe-rdmav16.so the lib itself) Once tools like ibv_devinfo list the interface, it means syzkaller can use it, and very probably means the kernel can as well; I've never seen syzkaller use any library call but I'm not even sure I would know how to create a qp without libibverbs, would standard stuff be OK ?
I think the interface improved quite a bit since I last looked at it so I'll need a bit of time to figure it out again but I'll send you a simple conection with a few messages soonish™
[1] https://github.com/cea-hpc/mooshika
[1] https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1211716.html
          Re: kernel BUG at arch/x86/mm/physaddr.c:LINE!      Cache   Translate Page      
Dmitry Vyukov writes: (Summary) wrote: Cc+: Miklos
It seems reasonable to ignore arch/.*/mm/physaddr.c as suspected guilty file in future -- we already ignore everything related to kmalloc/kfree and this is called from kfree.
I've made the corresponding change to syzkaller:
https://github.com/google/syzkaller/commit/ba8cd6d708b97d6be4f9164758b6a7c690d252b2
Thanks for re-routing this one, Thomas!
For more options, visit https://groups.google.com/d/optout.
          Re: [RFC PATCH 0/7] Introduce thermal pressure      Cache   Translate Page      
Lukasz Luba writes: (Summary) Hi Thara,
I have run it on Exynos5433 mainline.
When it is enabled with step_wise thermal governor, some of my tests are showing ~30-50% regression (i.e. hackbench), dhrystone ~10%.
Could you tell me which thermal governor was used in your case? Is it single threaded compute-intensive?
Regards,
Lukasz
On 10/09/2018 06:24 PM, Thara Gopinath wrote:
create mode 100644 kernel/sched/thermal.h

          Re: [PATCH v7 3/6] seccomp: add a way to get a listener fd from ptrace      Cache   Translate Page      
Paul Moore writes: (Summary) However, from what I have seen, this approach looks very ptrace-y to me (I imagine to others as well based on the comments) and because of this I think ensuring the usual ptrace access controls are evaluated, including the ptrace LSM hooks, is the right thing to do.
If I've missed something, or I'm thinking about this wrong, please educate me; just a heads-up that I'm largely offline for most of this week so responses on my end are going to be delayed much more than usual.
[1]: https://lore.kernel.org/lkml/CAG48ez3R+ZJ1vwGkDfGzKX2mz6f=jjJWsO5pCvnH68P+RKO8Ow@mail.gmail.com/ [1]: https://lore.kernel.org/lkml/CAG48ez3R+ZJ1vwGkDfGzKX2mz6f=jjJWsO5pCvnH68P+RKO8Ow@mail.gmail.com/ [1]: https://lore.kernel.org/lkml/cover.1536342881.git.yi.z.zhang@linux.intel.com So it looks like your current logic is just working around the bit then since it just allows for reserved DAX pages.

          Re: kernel BUG at arch/x86/mm/physaddr.c:LINE!      Cache   Translate Page      
Thomas Gleixner writes: On Wed, 10 Oct 2018, syzbot wrote:
Cc+: Miklos
https://goo.gl/tpsmEJ#testing-patches

          kernel BUG at arch/x86/mm/physaddr.c:LINE!      Cache   Translate Page      
syzbot writes: (Summary) 3d 01 f0 ff ff 0f 83 cb 08 fc ff c3 66 2e 0f 1f 84 00 00 00 00 RSP: 002b:00007ffc88122678 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5 RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00000000004418e9 RDX: 00000000200000c0 RSI: 0000000020000000 RDI: 0000000000400000 RBP: 00007ffc881226c0 R08: 0000000020000100 R09: 0000000000000100 R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000003 R13: ffffffffffffffff R14: 0000000000000000 R15: 0000000000000000 Modules linked in: ---[ end trace 25e838f694c8a24f ]--- RIP: 0010:__phys_addr+0xff/0x120 arch/x86/mm/physaddr.c:22 Code: 3c 02 00 75 31 4c 8b 25 ff c3 ee 07 48 89 de bf ff ff ff 1f e8 a2 7a 46 00 49 01 dc 48 81 fb ff ff ff 1f 76 a7 e8 61 79 46 00 <0f>
          New user got unexpected error messages in v5.0.0      Cache   Translate Page      

When I started kicad 5.0.0 this morning, an error dialog box appeared saying:
20181010_kicad_assertion_failed1
Dialog Box:
/build/kicad-5.0.0+dfsg1/common/project.cpp(83): assert "m_project_name.GetExt() == ProjectFileExten

Backtrace

So I clicked the [Copy to Clipboard] button and got this text:

ASSERT INFO:
/build/kicad-5.0.0+dfsg1/common/project.cpp(83): assert "m_project_name.GetExt() == ProjectFileExten

BACKTRACE:
[1] wxEntry(int&, wchar_t**)
[2] __libc_start_main
[3] _start

After clicking the [Continue] button on the dialog, kicad gave me another dialog box error message:


So I clicked the [Copy] button and got this text:

09:21:34: Cannot set locale to language “English (U.S.)”.
09:21:34: locale ‘en_US’ cannot be set.
09:21:37: Cannot set locale to language “English (U.S.)”.
09:21:37: locale ‘en_US’ cannot be set.
09:25:32: file ‘/home/george/kicad4/555-PWM-relay/555_PWM_AC/555_PWM_AC.sch’, line 1: ‘=’ expected.
09:25:32: file ‘/home/george/kicad4/555-PWM-relay/555_PWM_AC/555_PWM_AC.sch’, line 2: ‘=’ expected.
09:25:32: file ‘/home/george/kicad4/555-PWM-relay/555_PWM_AC/555_PWM_AC.sch’, line 3: ‘=’ expected.
09:25:32: file ‘/home/george/kicad4/555-PWM-relay/555_PWM_AC/555_PWM_AC.sch’, line 4: ‘=’ expected.
09:25:32: file ‘/home/george/kicad4/555-PWM-relay/555_PWM_AC/555_PWM_AC.sch’, line 5: ‘=’ expected.

[ lines 6 through 256 omitted because I don’t see any new information ]

09:25:32: file ‘/home/george/kicad4/555-PWM-relay/555_PWM_AC/555_PWM_AC.sch’, line 255: ‘=’ expected
09:25:32: file ‘/home/george/kicad4/555-PWM-relay/555_PWM_AC/555_PWM_AC.sch’, line 256: ‘=’ expected

Here is my kicad version info. from [Help]…[About Kicad}:

Application: kicad
Version: 5.0.0+dfsg1-2~bpo9+1, release build
Libraries:
wxWidgets 3.0.4
libcurl/7.52.1 OpenSSL/1.0.2l zlib/1.2.8 libidn2/2.0.5 libpsl/0.17.0 (+libidn2/0.16) libssh2/1.7
Platform: Linux 4.17.0-0.bpo.3-amd64 x86_64, 64 bit, Little endian, wxGTK
Build Info:
wxWidgets: 3.0.2 (wchar_t,wx containers,compatible with 2.8) GTK+ 2.24
Boost: 1.62.0
OpenCASCADE Community Edition: 6.8.0
Curl: 7.52.1
Compiler: GCC 6.3.0 with C++ ABI 1010

Build settings:
USE_WX_GRAPHICS_CONTEXT=OFF
USE_WX_OVERLAY=OFF
KICAD_SCRIPTING=ON
KICAD_SCRIPTING_MODULES=ON
KICAD_SCRIPTING_WXPYTHON=ON
KICAD_SCRIPTING_ACTION_MENU=ON
BUILD_GITHUB_PLUGIN=ON
KICAD_USE_OCE=ON
KICAD_USE_OCC=OFF
KICAD_SPICE=OFF

Here is my operating system info.:
$ uname -a
Linux neptune1 4.16.0-0.bpo.1-amd64 #1 SMP Debian 4.16.5-1~bpo9+1 (2018-05-06) x86_64 GNU/Linux
george@neptune1:~/linux/neptune5.3

de KInfoCenter - Info Center:

Software
Neptune 5.0
KDE Plasma Version 5.12.5
KDE Frameworks Version 5.46.0
Qt Version 5.7.1
Kernel Version 4.16.0-0bpo1-amd64
OS Type 64-bit

Hardware
Processors: 4 × AMD A8-5600K APU with Radeon™ HD Graphics
Memory: 6.9 GiB of RAM
Swap: 3.5 GiB of Swap
Graphics
OpenGL Renderer: AMD ARUBA (DRM 2.5.0/4.16.0-0.bpo.1-amd64, LLVM 5.0.1)
Xorg-Server/Wayland: 1.19.2
Mesa 3D: 17.3.9

OS
OS Type: 64-bit
OS Version: Neptune5

The message surprised me because I had no intention of opening this project,
555_PWM_AC.

The following text appeared in the right-hand-side of the main kicad GUI on a white
background:

Project name:
/home/george/kicad4/555-PWM-relay/555_PWM_AC/555_PWM_AC.sch

Maybe I simply didn’t properly close this project – could that be it? No, actually, I don’t recall ever opening it w/ kicad 5.0.0.
EDIT: When I tried to open the project I had last been working on
(KiCad5.0.0_dfsg1-2-bpo9+1//home/george/kicad5/BuildElectronicCircuits.demo/BuildElectronicCircuits_demo.pro
I got another error message in another dialog box,

If there’s anything I can do to fix kicad, please let me know.

Russ


          Comment on Linux Has a Code of Conduct and Not Everyone is Happy With it by Calisto94      Cache   Translate Page      
Now, probably even the highly-criticized macOS will be more free than the SJW-infiltrated Linux kernel development. But not all servers in the world could anyhow be moved over to using macOS, let alone Unix/BSD....
          Comment on Linux Has a Code of Conduct and Not Everyone is Happy With it by Calisto94      Cache   Translate Page      
Sadly, you're right. The Linux kernel development might become a significant problem, now as "post-meritocratic" order will be enforced and even code of non-skilled coders can't be rejected from being added to the kernel because only taking good code into it would "reward »bad people« for writing good code" (o-tone Coraline Ada Ehmke...) (as if any good coder who can do more than basic Ruby such as him/her would be a bad person -_-)...
          caf_kernel_msm - This is simply a clone of some CodeAurora kernel/msm      Cache   Translate Page      
This is simply a clone of some CodeAurora kernel/msm.git branches. I've made no changes here. It's simply for easier reading of commits. I run a cron job to keep it up to date. If there is a particular branch you'd like to see from kernel/msm.git up here, just let me know.

          android_kernel_hp_tenderloin      Cache   Translate Page      

          android_kernel_essential_msm8998      Cache   Translate Page      

          Canonical released a Kernel Live Patch to fix L1TF and SpectreRSB      Cache   Translate Page      
Canonical released a Kernel Live Patch to fix L1TF and SpectreRSB

Canonical has released a new Kernel Live Patch to fix L1TF and SpectreRSB, as well as other flaws. Check out the details of this update.

Read the rest of the article "Canonical released a Kernel Live Patch to fix L1TF and SpectreRSB"

The post Canonical released a Kernel Live Patch to fix L1TF and SpectreRSB appeared first on Blog do Edivaldo.


          New Kernel Patch Fixes 2 Security Flaws in Debian      Cache   Translate Page      
New Kernel Patch Fixes 2 Security Flaws in Debian

The Debian project has released yet another update; this new kernel patch fixes 2 security flaws in Debian. Check out the details of this important update.

Read the rest of the article "New Kernel Patch Fixes 2 Security Flaws in Debian"

The post New Kernel Patch Fixes 2 Security Flaws in Debian appeared first on Blog do Edivaldo.


          Will the next kernel be written in Go?      Cache   Translate Page      
A group of researchers from MIT, among them the well-known Robert Tappan Morris, author of probably the first worm to spread over the Internet (1988) and the first person to be convicted for such activity, presented this week a research paper dealing...
          Need Help! Robofill 290 does not start!!!      Cache   Translate Page      
Greetings, when turning on the robofil290 machine, only the following messages are displayed on the screen: CSL kernel. Interface revision 2 This...
          Windows 10 October Update receives KB4464330, build number now 17763.55      Cache   Translate Page      

Microsoft today released a new patch for the Windows 10 October Update (Version 1809). Users can receive it automatically through Windows Update or install it manually via a direct download link. After installing the KB4464330 cumulative update, the latest build number rises to Build 17763.55, and the update focuses on fixing the bug that mistakenly deleted user profile files.

Visit:

Microsoft China Official Store - Windows


Windows 10 October Update receives KB4464330, build number now 17763.55

The update also ships with a number of security fixes and general performance improvements. The changelog states: "Addresses an issue where an incorrect timing calculation may cause the 'Delete user profiles older than a specified number of days' group policy to prematurely delete user profiles."

Today's update also delivers fixes and improvements for the Microsoft Graphics Component, Microsoft JET Database Engine, Windows Wireless Networking, Windows Kernel, Microsoft Scripting Engine, Internet Explorer, Microsoft Edge, Windows Media Player, Windows Storage and Filesystems, Windows Peripherals, Windows Linux and Windows MSXML.

Download:

32-bit (x86) and 64-bit


          Hadoop Needs To Be A Business, Not Just A Platform      Cache   Translate Page      

It is safe to say that a little more than a decade ago, when Google's MapReduce and Google File System distributed storage and computing platform was cloned at Yahoo and offered up to the world as a way to transform the nature of data analytics at scale, we all had much higher hopes for the emergence of platforms centered around Hadoop that would change enterprise, not just webscale, computing.

It has been a lot tougher to build the road to enterprise customers and therefore profits, and the reason is simple. Databases are extremely sticky and very hard to change, even when the promise of extremely cheap storage, at least by the standards of the mid-2000s, is dangled like a juicy carrot. Hadoop, which became the name of a collection of mostly open source programs dealing with data storage and analytics at scale, has been brilliantly and carefully evolved into a number of different platforms by the likes of Cloudera, Hortonworks, MapR Technologies, and even IBM for a while.

But Hadoop has remained a complex if sophisticated platform aimed at the upper echelons of computing, suitable for the Global 5000 customers that were once on the bleeding edge, four or five decades ago, with IBM mainframes for transaction processing. The pace of technological change has accelerated much faster than Moore’s Law, and there are so many ways to skin the analytics cat that it is, frankly, as ridiculous as it is exciting and interesting. Enthusiasm tends to run ahead of practicalities, which is why old technologies persist. The question we have as we contemplate the merger between Cloudera and Hortonworks, arguably the largest commercial distributors and the only two who have made it to public offerings to investors on Wall Street, is whether or not their momentum is enough that Hadoop will be able to evolve and become a profitable business.

That is the central question. One might argue that merging two customer bases, two code bases with different licensing philosophies and some radically different approaches to storing and querying data, and two distinct companies so they stop fighting each other will make for a better and stronger Hadoop platform. But it is certainly not a foregone conclusion that Hadoop as a business is as good as Hadoop as a platform, and the whole premise of a commercial open source distribution is that it has to be a good business so the platform can be reinvested in to keep it improving and evolving.

There are very few platforms that have succeeded in this regard, and Red Hat, with its linux server, JBoss application middleware, OpenStack cloud controller, and OpenShift Kubernetes container orchestrator, is really the only good example worth bringing up from the open source realm. Nothing else even comes close. If Red Hat had created a Hadoop distribution, as many of us thought it should have, or bought one, as it certainly could have, it is probable that Cloudera and Hortonworks would have never become public companies, which allowed their investors, who collectively plowed $1.04 billion into the former and $248 million into the latter, to cash out. (Intel’s $740 million infusion into Cloudera in 2014 was just an example of the hubris and folly that the chip giant can indulge in thanks to its virtual monopoly in PC and server chips. It happens to all big tech companies that create large profit pools. This list of such acquisitions and partnerships by IBM is long, just as an example.)

As a venture capital harvesting machine, Hadoop has been brilliant. Don’t get us wrong. And from the humble beginnings of the MapReduce data chunking and chewing algorithm and the Hadoop Distributed File System, the Hadoop platform has grown into a vast ecosystem of tools that mirrors all of the wonderful things that have come to surround the Linux kernel and turned it into a proper operating system that can span everything from a smartphone to a supercomputer. Hadoop is ornate, and sometimes baroque, and has so many variations on the themes for everything from data storage to SQL and other kinds of database and data warehouse overlays to different distributed computation models to a layer for in-memory processing and machine learning. It is a very large Swiss Army knife. That is Hadoop’s best feature, and it has also been its curse. Perhaps now, with the two largest Hadoop players merging, the Hadoop stack can be pruned a bit and better optimized for the workloads of the 2020s.

That, we presume, is the idea behind the merger between Cloudera and Hortonworks. The two companies also want to remove costs and probably remove some of the intense competition on pricing to get the combination to profitability, as is expected from every public company. (Will it be called CloudWorks? HortonEra? Something different? Or just Cloudera? Probably not Hortonworks.) And it looks like the combined Hadoop distributions have a path to profitability, if the trend lines hold.

That said, these two companies have burned through a tremendous amount of money to get here, and in the past six and a half years where we have visibility into the numbers, the businesses have indeed grown, but at tremendous cost. Both Cloudera and Hortonworks had models that showed they would grow a lot faster than they actually did, and the slower growth has pushed out the point of profitability ahead of them every year. Adding more and more blades to the Hadoop Swiss Army knife has been costly, and to their credit, they have done innovative things that have kept Hadoop relevant as conditions in the market have changed dramatically in the past decade. What they are trying to do is extremely difficult, and we have nothing but tremendous respect for the effort that some of the smartest people we know in infrastructure and business have put in. But the numbers are not pretty, even if they are getting rosier here in 2018 and looking out into 2019 and 2020.

In fact, it might have been better for these two companies to have merged a few years back, cleaned everything up, got the synergies reckoned, and be going public right now.

Cloudera got the early jump as the dominant revenue generator of the Hadoop stack, but in recent years, Hortonworks has been catching up fast. Take a look:


Hadoop Needs To Be A Business, Not Just A Platform

The bars at the far right show figures for the first half of calendar 2018, so don’t mistakenly think revenues have dropped.

Here is our analysis in tabular form, so you can see the numbers yourself:


Hadoop Needs To Be A Business, Not Just A Platform

Clearly sales growth has cooled for Cloudera, which grew only 26 percent in the past two quarters compared to the same period 12 months earlier, down from the 42 percent growth rate in 2017, while Hortonworks is still humming along at 40 percent growth here in 2018 thus far. The combined companies are probably the best way to reckon the overall growth rate, and for the first half that is 32 percent growth for $378 million in sales. There is no reason to believe that the combination cannot break through $800 million in sales in calendar 2019 and push up through $1 billion in 2020.

If you do the math, Cloudera has raked in $1.28 billion in revenues in the past six and a half years, while Hortonworks only brought in $808 million. Add in the $1.31 billion in venture capital, plus the $225 million that Cloudera raised in early 2017 for its IPO and the $100 million that Hortonworks raised in late 2014 from its IPO, and the total pile of cash that has come to the pair is $3.69 billion. Hortonworks still has $86 million of cash and Cloudera still has $440.1 million. But over that same time period, Cloudera has booked cumulative losses of $1.19 billion and Hortonworks has cumulative losses of $979 million, for a total of $2.16 billion. Both separately and together, these companies are burning the wood a lot faster than they can cut it. This chart shows it visually:


Hadoop Needs To Be A Business, Not Just A Platform

But the financial situation is getting better, as the data shows. The Cloudera and Hortonworks presentations accompanying the merger announcement use trailing twelve month data, which is suitable but we mixed the quarterly data above because it has a longer trend line. Here is what the profit top-level financials look like:


Hadoop Needs To Be A Business, Not Just A Platform

The combined companies, on a trailing twelve month basis, have $720 million in revenues, and gross margins are pretty good at 74 percent. A software company with a legacy installed base can pull in 85 percent to 95 percent gross margins, and aside from the elimination of redundant costs and other synergies that Cloudera and Hortonworks are talking about, the reduction in competition is going to help, too. No one is talking about that, of course, but that is no doubt part of the thinking behind the merger. There is some cross-selling that is possible to boost revenues, but we think the reduction in competition is a bigger deal. Rationalizing the very different licensing models (Hortonworks is pure open source with subscription support, while Cloudera is open core with support plus enterprise add-ons with subscription licenses for key features) is not going to be easy, and many products and projects will have to be merged or picked one over the other. Still, even with the $125 million in synergies removed, the combined company will move closer towards profitability, and with a reasonable 30 percent revenue growth rate, the new entity should break through $1 billion and be profitable in 2020. And that is precisely the plan:


Hadoop Needs To Be A Business, Not Just A Platform

To be precise, the combined Cloudera-Hortonworks is telling Wall Street that it can get above $1 billion in sales and have gross margins about 75 percent and operating margins above 10 percent for calendar 2020. Which implies that it will have actual profits, if all goes well.

The total addressable market is expanding, too, and that will help. Here is how Cloudera and Hortonworks see the opportunity out in front of them:


Hadoop Needs To Be A Business, Not Just A Platform

The core market that Hadoop is chasing is comprised of three different segments, according to Cloudera-Hortonworks, and will grow at a compound annual growth rate of 21 percent between 2017 and 2022, from $12.7 billion to $32.3 billion. Within that, cognitive and artificial intelligence workloads represent a $14.3 billion opportunity in 2022, $4.9 billion for advanced and predictive analytics software, and $13.2 billion for dynamic data management systems (what we would call modern storage). In addition to that, the Hadoop platform is also chasing relational and non-relational database management systems and data warehouses, which is another $51 billion opportunity in 2022, for a total TAM of $83 billion. Even a small slice of this, which is what Hadoop currently gets today, could be billions of dollars by then. (We shall see.)

The deal for the merger of the two companies is surprisingly simple. Shareholders in Hortonworks will get 1.305 shares in Cloudera and Cloudera will be the remaining company in fact, if not necessarily in name. This means that Cloudera shareholders will own 60 percent of the combined company and Hortonworks shareholders will own the remaining 40 percent. The combined companies had a fully diluted equity value of $5.2 billion before the merger was announced. At the time the deal was announced, the combined firms had more than $500 million in cash, no debt, and 2,500 customers who largely do not overlap. There are more than 120 customers who spend $1 million a year and another 800 customers who spend more than $100,000 a year for subscriptions and such.


          Aromatherapy Bath Bomb Set of 4 | Gift under 15      Cache   Translate Page      

$14.99

Refreshing and rejuvenating, these aromatherapy bath bombs are made with pure essential oils. Perfect to give as gift, use for showers or even use it as decoration in your bathroom! They are fabulous, fizzy and fun! Use one to enhance your bath experience. Fill your bathtub with warm water, drop in the Bath Bomb and lay back to enjoy. These mini bath bombs are perfect for pedicure as well.

Give something beautiful, natural and handmade.

This Gift Set will include 4 assortment of our most popular bath bombs.


We make our bath bombs with Sea salt and pure essential oils so they will smell fabulous even before you open the package. Relax at the end of the day with warm bath or foot soak. Your skin will feel smooth and soft after the bath and you will sleep well.

Ingredients:
Baking Soda, Citric Acid, Sea Salt, SLSA, Apricot Kernel Oil, Poly 80, Therapeutic Grade Essential Oils, Natural Color & Witch Hazel


          Reddit: Linus' Behavior and the Kernel Development Community      Cache   Translate Page      
submitted by /u/tlitd
[link] [comments]
          Internship- Product Development- VM Monitor Group - VMware - Palo Alto, CA      Cache   Translate Page      
VMware is a global leader in cloud infrastructure and business mobility. Excellent knowledge of OS kernel internals, including memory management, resource...
From VMware - Mon, 01 Oct 2018 19:00:23 GMT - View all Palo Alto, CA jobs
          Apricot in Brandy Marzipan      Cache   Translate Page      
Apricot in Brandy Marzipan

Apricot in Brandy Marzipan

Sugar, Cocoa Mass, Almonds (11%), Apricot Kernels, Apricots (10%), Glucose Syrup, Alcohol, Cocoa Butter, Brandy (0.9%), Milk Fat, Emulsifier (Rapeseed Lecithins), Thickener (Pectin), Flavouring, Acid (Citric Acid), Preservative (Sorbic Acid, Potassium Sorbate). Dark Chocolate contains 50% Cocoa Solids minimum. Alcohol: 1.5% May contain traces of Nuts. Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✔ Contains Gluten ✖ Contains Milk or Milk Derivatives ✔ Contains Eggs or Egg Derivatives ✖ Contains Peanuts ✖ Contains Other Nuts ✔ Contains Soya ✖ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✔ Contains Artificial Colours ✖ Contains Artificial Flavours ✔


          Blueberry in Vodka Marzipan      Cache   Translate Page      
Blueberry in Vodka Marzipan

Blueberry in Vodka Marzipan

Sugar, Cocoa Mass, Blueberries 13%, Almonds (11%), Apricot Kernels, Apricots (9%), Glucose Syrup, Vodka 3%, Cocoa Butter, Milk Fat, Thickener (Pectin), Emulsifier (Rapeseed Lecithin), Flavouring, Acid (Citric Acid), Preservatives (Sorbic Acid, Potassium Sorbate), Natural Flavouring. Dark Chocolate contains 50% Cocoa Solids minimum. Alcohol: 1.7% May contain traces of Nuts. Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✔ Contains Gluten ✖ Contains Milk or Milk Derivatives ✔ Contains Eggs or Egg Derivatives ✖ Contains Peanuts ✖ Contains Other Nuts ✔ Contains Soya ✖ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✔ Contains Artificial Colours ✖ Contains Artificial Flavours ✔


          Cherry in Rum Marzipan      Cache   Translate Page      
Cherry in Rum Marzipan

Cherry in Rum Marzipan

Sugar, Cocoa Mass, Cherries (12%), Almonds (11%), Apricot Kernels, Glucose Syrup, Spirits (Rum (0.5%), Cocoa Butter, Milk Fat, Alcohol, Emulsifier (Rapeseed Lecithins), Thickener (Pectin), Preservative (Sorbic Acid/Potassium Sorbate) Acid (Citric Acid). Dark Chocolate contains 50% Cocoa Solids minimum. Alcohol: 1.4% May contain traces of Nuts. Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✔ Contains Gluten ✖ Contains Milk or Milk Derivatives ✔ Contains Eggs or Egg Derivatives ✖ Contains Peanuts ✖ Contains Other Nuts ✔ Contains Soya ✖ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✔ Contains Artificial Colours ✖ Contains Artificial Flavours ✔


          Software Developer - Wafer Space Semiconductor Technologies Private Limited - Bengaluru, Karnataka      Cache   Translate Page      
Must have experience with Architecture level with multiple SW technology development and e2e product integration and product scope(kernel, MW, android framework...
From Monster IN - Tue, 09 Oct 2018 14:34:36 GMT - View all Bengaluru, Karnataka jobs
          Google's Pixel 3 is the first Android device to ship with new CFI kernel protections      Cache   Translate Page      
Google adds Control Flow Integrity protection to the Android kernel.
          Software Developer - Kernel and Hardware Specialist - Arctic Wolf Networks - Waterloo, ON      Cache   Translate Page      
Experience with Linux networking, Linux bridge/interfaces and Open vSwitch. Your experience in debugging hardware, BIOS, Linux kernel, and device drivers will...
From Arctic Wolf Networks - Sun, 30 Sep 2018 14:09:37 GMT - View all Waterloo, ON jobs
          Embedded Software Developer - Genetec - Montréal, QC      Cache   Translate Page      
Drivers (Windows and Linux). System services and utilities (Windows and Linux). Knowledge of the system (kernel) programming model under Windows and Linux....
From Genetec - Fri, 10 Aug 2018 18:10:43 GMT - View all Montréal, QC jobs
          Formatting output      Cache   Translate Page      
Hi All, from Ansible I can gather and grep the following as an example "ansible_hostname": "aspecialhost", "ansible_kernel": "3.10.0-862.11.6.el7.x86_64", ...
          LXer: CentOS 6 and RHEL 6 Get Important Kernel Security Update for FragmentSmack Flaw      Cache   Translate Page      
Published at LXer: CentOS maintainer Johnny Hughes and Red Hat announced the availability of an important Linux kernel security update for the CentOS Linux 6 and Red Hat Enterprise Linux 6...
          Google failed to justify the Pixel 3 XL's massive notch - The Verge      Cache   Translate Page      

Google's Pixel 3 and Pixel 3 XL arrived yesterday without too much fanfare. After all, the devices leaked pretty much in entirety over the course of the last two months, leaving little to the imagination when Google hardware chief Rick Osterloh came on ...
Chrome OS may be the 2-in-1 solution we've been waiting for (Engadget)
Hands-on: Google Pixel Slate brings the true Chrome OS tablet debut [Video] (9to5Google)
Google's Pixel 3 is the first Android device to ship with new CFI kernel protections (ZDNet)
Digital Trends - Tom's Guide - Gizmodo - Android Police

          Installing Ubuntu on Windows 10 is now easier, and we show you how      Cache   Translate Page      

Ubuntu is an excellent distribution based on the Linux kernel; the system is relatively easy to use and its graphical interface is very well designed, which is why distributions of this kind keep growing in popularity. If you are interested in installing, or simply trying out, a new OS like this one, stick around: in this article […]

The post Instalar Ubuntu en Windows 10 ahora es más sencillo, te contamos cómo appeared first on Soluciones WinDroid.


          Mageia gets an update      Cache   Translate Page      

On October 5, Donald Stewart announced a new version of the Mageia distribution with an updated Linux kernel and additional software packages.


          Software analysis: Bpftrace set to become the Dtrace successor for Linux      Cache   Translate Page      
The analysis tool Dtrace is now available under the GPL and could find its way into the Linux kernel. With Bpftrace, however, an alternative is now available that builds on current kernel technology and is being billed as the successor to Dtrace. (Linux kernel, virtualisation)
          [Free] Windows 10 1809 gets update KB4464330 rushed out right after its re-release      Cache   Translate Page      
Microsoft's Windows 10 1809 ran into trouble right after launch: there was a chance it would wipe the user's files under c:\users\username\ ([Important] Microsoft announced it was temporarily withdrawing the Windows 10 October 2018 Update, version 1809 RS5, for all users). Only recently, after those problems were corrected, was the update put back up for download, and within less than 24 hours Microsoft pushed out KB4464330, an update specifically for version 1809. The main changes are as follows: Addresses an issue where an incorrect timing calculation may prematurely delete user profiles on devices subject to the "Delete user profiles older than a specified number of days" group policy. Security updates to Windows Kernel, Microsoft Graphics Component, Microsoft Scripting Engine, Internet Explorer, Windows Storage and Filesystems, Windows Linux, Windows Wireless Networking, Windows MSXML, the Microsoft JET Database Engine, Windows Peripherals, Microsoft Edge, Windows Media Player, and Internet Explorer. The important part is the first item: although it is worded rather formally, put plainly it fixes the earlier data-loss fiasco, so all Windows 10 1809 users are advised to install it.
          Biscuit – a research OS written in Go      Cache   Translate Page      

Biscuit is a monolithic, POSIX-subset operating system kernel in Go for x86-64 CPUs. It was written to study the performance trade-offs of using a high-level language with garbage collection to implement a kernel with a common style of architecture.

With ~38k commits and 8+ years of dev, this has been a massive effort. Find the research paper right here.


          Will Thompson: Wandering in the symlink forest forever      Cache   Translate Page      

Last week, Philip Withnall told me that Meson has built-in support for generating code coverage reports: just configure with -Db_coverage=true, run your tests with ninja test, then run ninja coverage-{text,html,xml} to generate the report in the format of your choice. The XML format is compatible with Cobertura’s output, which is convenient since Endless’s Jenkins is already configured to consume Cobertura XML generated by Autotools projects using our EOS_COVERAGE_REPORT macro. So it was a simple matter of adding gcovr to the build environment, running ninja coverage-xml after the tests, and moving the report to the right place for Jenkins to find it. It worked well on the projects I tested, so I decided to enable it for all Meson projects built in our CI. Sure, I thought, it’s not so useful for our forks of GNOME and third-party projects, but it’s harmless and saves adding per-project config, right?

Fast-forward to yesterday, when someone noticed that a systemd build had been stuck on the ninja coverage-xml step for 16 hours. Uh oh.

It turns out that gcovr follows symlinks when scanning for coverage files, but didn’t check for cycles. systemd’s test suite generates a fake sysfs tree, with many circular references via symlinks. For example, there are 64 self-referential ttyX trees:

$ ls -l build/test/sys/devices/virtual/tty/tty1
total 12
-rw-r--r-- 1 wjt wjt    4 Oct  9 12:16 dev
drwxr-xr-x 2 wjt wjt 4096 Oct  9 12:16 power
lrwxrwxrwx 1 wjt wjt   21 Oct  9 12:16 subsystem -> ../../../../class/tty
-rw-r--r-- 1 wjt wjt   16 Oct  9 12:16 uevent
$ ls -l build/test/sys/devices/virtual/tty/tty1/subsystem/tty1
lrwxrwxrwx 1 wjt wjt 30 Oct  9 12:16 build/test/sys/devices/virtual/tty/tty1/subsystem/tty1 -> ../../devices/virtual/tty/tty1
$ readlink -f build/test/sys/devices/virtual/tty/tty1/subsystem/tty1
/home/wjt/src/endlessm/systemd/build/test/sys/devices/virtual/tty/tty1

And, worse, all other ttyY trees are accessible via the symlinks from each ttyX tree. The kernel caps the number of symlinks per path to 40 before lookups fail with ELOOP, but that's still 64⁴⁰ paths to resolve, just for the fake ttys. Quite a big number!

The fix is straightforward: maintain a set of visited (st_dev, st_ino) pairs while walking the tree, and prune subtrees we’ve already visited. I tried adding a similar highly self-referential symlink graph to the gcovr test suite, so that it would run in reasonable time if the fix works and essentially never terminate if it does not. Unfortunately, pytest has exactly the same bug: while searching for tests to run, it gets lost wandering in the symlink forest forever.
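A minimal sketch of that visited-set pruning (a hypothetical helper, not the actual gcovr or pytest fix): remember the (st_dev, st_ino) pair of every directory already visited and skip any directory whose pair has been seen before, so a walk that follows symlinks still terminates on trees like the fake sysfs above.

import os

def files_without_cycles(root):
    # Walk `root`, following directory symlinks, but visit each real directory
    # (identified by its (st_dev, st_ino) pair) at most once, so that
    # self-referential symlink forests terminate.
    seen = set()
    stack = [root]
    while stack:
        directory = stack.pop()
        try:
            st = os.stat(directory)          # os.stat follows symlinks
        except OSError:
            continue
        key = (st.st_dev, st.st_ino)
        if key in seen:
            continue                         # already visited: prune this subtree
        seen.add(key)
        for name in sorted(os.listdir(directory)):
            path = os.path.join(directory, name)
            if os.path.isdir(path):          # also true for symlinks to directories
                stack.append(path)
            else:
                yield path

# e.g. sum(1 for _ in files_without_cycles('build/test/sys')) completes quickly
# instead of chasing the tty symlinks forever.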

This bug is a good metaphor for my habit of starting supposedly-quick side-projects.


          Microsoft contributes 60,000 patents to an open-source consortium      Cache   Translate Page      

Microsoft has joined the open-source consortium Open Invention Network. The software giant is making 60,000 of its own patents available to the network's members.

The Open Invention Network (OIN), founded in 2005 and counting corporations such as Google, IBM, Red Hat and Suse among its members, is dedicated to protecting Linux and other open-source software from patent lawsuits. Now the software giant Microsoft has joined the consortium as well, contributing more than 60,000 of its own patents to the Linux alliance, as Microsoft executive Erich Andersen explains in a blog post.

Until now, the roughly 2,650 members of the OIN community had accumulated a good 1,300 patents and applications, ZDNet.com writes. The tens of thousands of Microsoft patents now available for use by OIN members cover older open-source technologies such as Android, the Linux kernel and OpenStack, as well as newer technologies such as LF Energy and Hyperledger. What does that mean for Microsoft? With Android patents alone, the company is said to have earned around 3.4 billion US dollars by 2014.

Microsoft buries its dispute with the open-source community

The move therefore comes as quite a surprise, as Andersen himself acknowledges. It is no secret, he says, that there has been friction between Microsoft and the open-source community in the past when it came to opening up software patents. According to senior Microsoft executive Scott Guthrie, there has been a fundamental shift in philosophy at Microsoft. By opening its patent portfolio to the OIN, Microsoft wants to do its part to protect open-source projects from legal trouble, Guthrie told ZDNet.

The software company had previously thrown 10,000 patents into the ring to protect users of its Azure cloud platform from possible litigation. In addition, just last week Microsoft joined the anti-patent-troll initiative LOT Network, whose 300 members, including Amazon, Canon, Cisco, Lenovo, Tesla and SAP, together hold around 1.35 million patents. According to the LOT Network, more than 10,000 companies have already been sued at least once by a patent troll; mounting a defense costs 3.2 million dollars on average.

More on this topic:


          Red Hat Security Advisory 2018-2846-01      Cache   Translate Page      
Red Hat Security Advisory 2018-2846-01 - The kernel packages contain the Linux kernel, the core of any Linux operating system. Issues addressed include a denial of service vulnerability.
          Debian Security Advisory 4313-1      Cache   Translate Page      
Debian Linux Security Advisory 4313-1 - Several vulnerabilities have been discovered in the Linux kernel that may lead to a privilege escalation, denial of service or information leaks.
          Microsoft fixes Windows 10 file deletion issue      Cache   Translate Page      
Revised update released to Windows Insider test pilots.

Microsoft claims to have taken care of the inadvertent file deletion issue affecting users upgrading Windows 10, and is rolling out a fixed version to early adopters in its Windows Insider program for further testing.

A few days ago, Microsoft was forced to pause the rollout of Windows 10 version 1809, as upgraders complained that files in their computers' Documents, Pictures and Downloads folders had vanished.

The file deletion issue also saw Microsoft pull Windows Server 2019 downloads.

"We have fully investigated all reports of data loss, identified and fixed all known issues in the update, and conducted internal validation," Microsoft's director of program management for Windows servicing and delivery John Cable said.

Cable said Microsoft found that the file deletion happened if users had enabled redirection of Windows user data folders - including Desktop, Documents, Pictures, Screenshots, Videos, Camera roll and more - away from their default location under C:\Users\$UserName.

Some users told Microsoft that the previous April 2018 major Windows Update created duplicate empty copies of these folders if redirection was enabled.

Microsoft decided to remove these duplicate folders through code incorporated into version 1809.

Unfortunately for some users, "that change, combined with another change to the update construction sequence, resulted in the deletion of the original “old” folder locations and their content, leaving only the new 'active' folder intact," Cable said.

Cable also identified another file deletion scenario involving redirection of Known Folders to OneDrive storage whereby contents of the original default directory were erased if they weren't moved.

Furthermore, an early buggy version of the OneDrive client with autosave turned on didn't move files in users' Documents and Pictures folders from the old location to the new one.

When users upgraded to Windows 10 1809, the software update dutifully deleted the folders in the original location, including the files they contained, Cable said. Microsoft has issued an updated OneDrive client that moves the files as expected.

Cable claimed that very few users were affected by the file deletion problem, but conceded that "any data loss is serious".

Users who lost data during the upgrade to version 1809 are advised to call 13 20 58 in Australia for free assistance with recovering their files, Microsoft said.

Additions to version 1809

Quietly, Microsoft took the opportunity to fix a bug that saw user profiles being deleted due to an incorrect timing calculation.

The new version also received patches for a range of critical remote code execution vulnerabilities, as part of Microsoft's regular monthly Patch Wednesday security updates.

This includes fixes for system components such as the Windows kernel, graphics and file systems, the wireless networking and peripherals subsystems, along with the JET database engine, Windows Media Player and the Edge and Internet Explorer web browsers among others.

One privilege escalation zero-day, CVE-2018-8453, affects the Windows kernel and is known to have been exploited by nation-state advanced persistent threat groups.

Windows 10 1809 won't be broadly released until early adopters in the Windows Insider program have tested the revised update and provided feedback to Microsoft.


          [Advocacy] Re: Microsoft open-sources its patent portfolio      Cache   Translate Page      
Quote: "Keith Bergelt, OIN's CEO, commented on Microsoft's announcement in an interview: "This is everything Microsoft has, and it covers everything related to older open-source technologies such as Android, the Linux kernel, and OpenStack; newer technologies such as LF Energy and HyperLedger, and their predecessor and successor versions." In a conversation, Erich Andersen, Microsoft's corporate vice president and chief intellectual property (IP) counsel -- that is, Microsoft top pate...
          Kernel: Threading, Streebog, USB 3.0, "Thermal Pressure" and More      Cache   Translate Page      
  • A Look At Linux Application Scaling Up To 128 Threads

    Arriving last week in our Linux benchmarking lab was a dual EPYC server -- this Dell PowerEdge R7425 is a beast of a system with two AMD EPYC 7601 processors yielding a combined 64 cores / 128 threads, 512GB of RAM (16 x 32GB DDR4), and 20 x 500GB Samsung 860 EVO SSDs. There will be many interesting benchmarks from this server in the days and weeks ahead. For some initial measurements during the first few days of stress testing this 2U rack server, here is a look at how well various benchmarks/applications are scaling from two to 128 threads.

  • Linux Kernel Patches Posted For Streebog - Crypto From Russia's FSB

    Just months after the controversial Speck crypto code, which raised various concerns due to its development by the NSA and the possibility of backdoors, was added to the Linux kernel and then removed from the kernel tree, Russia's Streebog could now be mainlined.

    The Streebog cryptographic hash was developed by Russia's controversial FSB federal security service and other Russian organizations. Streebog is a Russian national standard and a replacement for their GOST hash function. Streebog doesn't attract as much controversy as NSA's Speck, though it is also not as well known; there are some hypothetical attacks, and some papers have questioned elements of the design. Streebog is considered a competitor to NIST's SHA-3 standard.

  • The Linux Kernel In 2018 Finally Deems USB 3.0 Ubiquitous Rather Than An Oddity

    The latest news in the "it's about darn time" section is the Linux kernel's default i386/x86_64 kernel configurations will finally ship with USB 3.0 support enabled, a.k.a. CONFIG_USB_XHCI_HCD.

    For many years now pretty much all Linux distribution vendor kernels have been shipping with CONFIG_USB_XHCI_HCD enabled, either built in or as a module... but built-in is the safer choice for avoiding potential issues at start-up time. As of this week, CONFIG_USB_XHCI_HCD=y is finally set in the default configurations for x86/x86_64 kernel builds, should you be spinning up a defconfig kernel. (A quick way to check how your own running kernel is configured is sketched after this list.)

  • "Thermal Pressure" Kernel Feature Would Help Linux Performance When Running Hot

    Linaro engineer Thara Gopinath sent out an experimental set of kernel patches today that introduces the concept of "thermal pressure" to the Linux kernel for helping assist Linux performance when the processor cores are running hot.

    While the Linux CPU frequency scaling code already deals with the event of CPU core(s) overheating as to downclock/limit the frequency, the kernel's scheduler isn't currently aware of the CPU capacity restrictions put in place due to that thermal event.

  • Containers are Linux

    Linux is at the core of today’s open source operating system development, and containers are a core feature of Linux. Linux containers and the Kubernetes community supporting them enable agencies to quickly stand up, distribute and scale applications in the hybrid clouds supporting the IT architecture of today’s digitally transformed government.

    But agencies need more than the speed and flexibility of containers and the power of Kubernetes to take full advantage of today’s hybrid cloud environment. They need open source enterprise software with full lifecycle support and a full complement of hardware certifications to ensure portability across platforms.
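Following up on the USB 3.0 item above, here is a small sketch, purely illustrative and not taken from any of the quoted articles, of checking how CONFIG_USB_XHCI_HCD is set for the kernel you are currently running; it assumes the configuration is exposed at /proc/config.gz or /boot/config-<release>, which is typical but not universal.

#!/usr/bin/env python3
# Sketch: report how CONFIG_USB_XHCI_HCD is set for the running kernel.
# Assumes the config is readable at /proc/config.gz or /boot/config-<release>.
import gzip
import os
import pathlib

release = os.uname().release
candidates = [pathlib.Path("/proc/config.gz"),
              pathlib.Path(f"/boot/config-{release}")]

for cfg in candidates:
    if not cfg.exists():
        continue
    opener = gzip.open if cfg.suffix == ".gz" else open
    with opener(cfg, "rt") as fh:
        for line in fh:
            if line.startswith("CONFIG_USB_XHCI_HCD="):
                print(f"{cfg}: {line.strip()}")   # e.g. CONFIG_USB_XHCI_HCD=y or =m
                break
        else:
            print(f"{cfg}: CONFIG_USB_XHCI_HCD not set")
    break
else:
    print("No kernel configuration file found to inspect")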



