Thursday, June 30, 2016

If you want to run a business in China

...then you will need a Chinese phone number, i.e. a phone number with the country code +86. Your customers will use this number to reach your company, and you will use it for outgoing calls to them, too.

There are many SIP providers that offer Chinese phone numbers, but not all of them are good. Here is why.

The phone system in China has an important quirk: it mangles the Caller ID on incoming international calls. This is not VoIP-specific and applies even to simple mobile-to-mobile calls. For example, my mobile phone number in Russia starts with +7 953, and if I place a call to almost any other country, the recipient will see that +7 953 XXX XXXX is calling. But if I call a phone number in China, they will instead see something else, with no country code and no common suffix with my actual phone number.

The problem is that some SIP providers land calls to China (including calls made from a Chinese number obtained from their pool) on gateways located outside China. If you use such a provider and call a Chinese customer, they will not recognize you: the call will be treated as international (even though it is meant to be between two Chinese phone numbers), and your Caller ID will be mangled.

As far as I know, there is no way to tell if a SIP provider is affected by this problem, without trying their service or calling their support.

Tuesday, May 24, 2016

Is TSX busted on Skylake, too? No, it's just buggy software

The story about Intel disabling Transactional Synchronization Extensions (TSX) on the Haswell and Broadwell lines of their CPUs by means of a microcode update made the rounds a while ago. But it looks like this is not the end of the story.

The company I work for has a development server at Hetzner, with this type of CPU:


processor : 0
vendor_id : GenuineIntel
cpu family : 6
model  : 94
model name : Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz
stepping : 3
microcode : 0x39
cpu MHz  : 3825.265
cache size : 8192 KB
physical id : 0
siblings : 8
core id  : 0
cpu cores : 4
apicid  : 0
initial apicid : 0
fpu  : yes
fpu_exception : yes
cpuid level : 22
wp  : yes
flags  : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov 
pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb 
rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology 
nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est 
tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt 
tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch intel_pt 
tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep 
bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 
dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp
bugs  :
bogomips : 6816.61
clflush size : 64
cache_alignment : 64
address sizes : 39 bits physical, 48 bits virtual
power management:


I.e. it is a Skylake. The server is running Ubuntu 16.04, and the CPU supports both the HLE and RTM instruction families.
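
A quick way to double-check which TSX-related flags the kernel actually sees (just a generic sketch, not tied to this particular server):

grep -wo -e hle -e rtm /proc/cpuinfo | sort -u
# prints "hle" and "rtm" when both instruction families are advertised;
# on a Haswell/Broadwell with the TSX-disabling microcode it prints nothing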

One of my recent tasks was to prepare, on this server, an LXC container based on Ubuntu 16.04 with a lightweight desktop accessible over VNC, for "remote classroom" purposes. We already have such containers on other servers, but they were based on Ubuntu 14.04. Such containers work well on this server, too, but it's time to upgrade. In these old containers, we use a regular Xorg server with a "dummy" video driver, and export the screen using x11vnc.

So, I decided to clone the old container and upgrade Ubuntu inside it. Result: x11vnc, or sometimes Xorg, now crashes (SIGSEGV) when one attempts to change the desktop resolution. The backtrace points into the __lll_unlock_elision() function, which is part of the glibc implementation of mutexes for CPUs with hardware lock elision support.

This crash doesn't happen when I run the same container on a server with an older CPU (which doesn't have TSX in the first place), or if I try to reproduce the bug at home (where I have a Haswell, with TSX disabled by the new microcode).

So, all apparently points to a bug related to these extensions. Or does it?

The __lll_unlock_elision() function has this helpful comment in it:

  /* When the lock was free we're in a transaction.
     When you crash here you unlocked a free lock.  */

And indeed, there is some discussion of another crash in __lll_unlock_elision(), related to the NVIDIA driver (which is not used here). In that discussion, it was pointed out that unlocking an already-unlocked mutex is silently ignored by mutex implementations that are not optimized for TSX, but that a CPU with TSX exposes such a latent bug. Lock-balance bugs are easy to check for with Valgrind. And indeed:

DISPLAY=:1 valgrind --tool=helgrind x11vnc
...
==4209== ---Thread-Announcement------------------------------------------
==4209== 
==4209== Thread #1 is the program's root thread
==4209== 
==4209== ----------------------------------------------------------------
==4209== 
==4209== Thread #1 unlocked a not-locked lock at 0x9CDA00
==4209==    at 0x4C326B4: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==4209==    by 0x4556B2: ??? (in /usr/bin/x11vnc)
==4209==    by 0x45A35E: ??? (in /usr/bin/x11vnc)
==4209==    by 0x466646: ??? (in /usr/bin/x11vnc)
==4209==    by 0x410E30: ??? (in /usr/bin/x11vnc)
==4209==    by 0x717D82F: (below main) (libc-start.c:291)
==4209==  Lock at 0x9CDA00 was first observed
==4209==    at 0x4C360BA: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==4209==    by 0x40FECC: ??? (in /usr/bin/x11vnc)
==4209==    by 0x717D82F: (below main) (libc-start.c:291)
==4209==  Address 0x9cda00 is in the BSS segment of /usr/bin/x11vnc
==4209== 
==4209== 

It is a software bug, not a CPU bug. But still, until such bugs are eliminated from the distribution, I'd rather not run it on a server whose CPU has TSX.

Sunday, May 8, 2016

Root filesystem snapshots and kernel upgrades

On my laptop (which runs Arch), I decided to take periodic snapshots of the filesystem, in order to easily revert bad upgrades (especially those involving a large and poorly understood set of interdependent packages). My toolset for this task is LVM2 and Snapper. Yes, I know that LVM2 is somewhat discouraged and that Snapper also supports btrfs, but most of the points below apply to btrfs, too.

Snapper, when used with LVM2, requires not just LVM2, but thinly-provisioned LVM2 volumes. Fortunately, Arch can have its root filesystem on such volumes, so this is not a problem.
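
For reference, setting Snapper up for such a root volume looks roughly like this (the config name "root" and the ext4 filesystem type are just common defaults, not necessarily my exact setup):

snapper -c root create-config --fstype "lvm(ext4)" /
snapper -c root create --description "before upgrade"   # take a manual snapshot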

So, I have /boot on /dev/sda1, LVM on LUKS on /dev/sda2, the root filesystem on a thinly-provisioned logical volume, /home on another thinly-provisioned volume, and swap on a regular (non-thin) volume. A separate /boot partition is needed because boot loaders generally don't understand thinly-provisioned LVM volumes, especially on encrypted disks. A separate volume for /home is needed because I don't want new files in /home to be lost if I revert the system to an old snapshot. The same need for a separate volume applies to other directories that contain data that should be preserved, but there are no such directories on my laptop. They could appear if I install, e.g., PostgreSQL.
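
A rough sketch of how such a layout can be created (the volume group name, pool name, and sizes are made up for illustration):

lvcreate --type thin-pool -L 100G -n pool vg
lvcreate --thin -V 40G -n root vg/pool    # thin volume for /
lvcreate --thin -V 60G -n home vg/pool    # thin volume for /home
lvcreate -L 8G -n swap vg                 # swap stays on a regular volume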

And now there is a problem. Rolling back to a snapshot works, but only if there were no kernel updates between the time the snapshot was taken and the time the rollback is attempted. The root cause is that the kernel image lives in /boot, while its loadable modules live in /usr/lib/modules. The modules get reverted, but the boot loader still loads the new kernel, which now has no corresponding modules.
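
After rolling back the root filesystem and rebooting, the mismatch is easy to see (the version numbers are purely illustrative):

uname -r              # e.g. 4.5.4-1-ARCH: the new kernel, still loaded from /boot
ls /usr/lib/modules   # e.g. 4.4.5-1-ARCH: only the old, reverted modules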

There are two solutions: either revert the kernel and its initramfs, too, when reverting the root filesystem, or make sure that the modules are never reverted. I have not investigated how to make the first option work, even though it would be the perfect solution. I did, however, try to make sure that the modules are not reverted, and I am not satisfied with the result.

The idea was to move the modules to /boot/modules and make that location available as /usr/lib/modules, either via a symlink or via a bind mount. A symlink doesn't work, because a kernel upgrade in Arch turns it back into a plain directory. A bind mount doesn't work either: by putting the modules on a non-root filesystem, one creates a circular dependency between mounting local filesystems and udev (and this would apply to a symlink, too).

Indeed, on startup systemd-udevd maps the /usr/lib/modules/`uname -r`/modules.alias.bin file into memory, so it has a (real) dependency on /usr/lib/modules being mounted. However, mounting local filesystems from /etc/fstab sometimes depends on systemd-udevd, because of device nodes. So, bind-mounting /usr/lib/modules merely from /etc/fstab, using the built-in systemd tools, cannot work.
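
For clarity, the two fstab entries in question would look roughly like this (the options are illustrative):

/dev/sda1        /boot             ext4   defaults   0 2
/boot/modules    /usr/lib/modules  none   bind       0 0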

But it can work from a wrapper that starts before the real init:

#!/bin/sh
mount -n /boot              # /dev/sda1 is in devtmpfs and doesn't need udev
mount -n /usr/lib/modules   # there is still a line in fstab about that
exec /sbin/init "$@" 
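
Such a wrapper has to live on the root filesystem and be passed to the kernel explicitly, e.g. via a boot loader entry (the path is hypothetical):

init=/usr/local/sbin/mount-modules-then-init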

But that's ugly. In the end, I removed the wrapper, installed an old known-working "linux" package, made a copy of the kernel, its initramfs and its modules, upgraded the kernel again, and put the saved files back, so that they are no longer controlled by the package manager. Now I have a known-good kernel lower down in the boot menu, and I know that its modules will always be present in my root filesystem as long as I don't revert further back than today's state.

And now one final remark. Remember that I said: "The same need for a separate volume applies to other directories that contain data that should be preserved"? There is a temptation to apply this to the whole /var directory, but that would be wrong. If the system is reverted to an old snapshot, the package database (which lives in /var/lib/pacman) should be reverted, too, and /var/lib/pacman is under /var.

The conclusion is that Linux plumbers should think a bit about this "revert the whole system" use case, and maybe move some directories.

Sunday, December 20, 2015

Ready to drop Gentoo

I have been a Gentoo user since 2010. At that time, it was, for me, a source of fresh, well-maintained packages, without the multimedia-related, US-lawyer-induced brain damage that plagued Debian. By compiling packages on my local PC, it also neatly sidestepped the legal problems related to redistributing GPL-ed packages with GPL-incompatible dependencies, and the trademark issues around Mozilla products. And it offered enough choice, in the form of USE flags, to steer around technologies that were still too raw.

Today, I am re-evaluating this decision. I still care about perfect multimedia support, even if it relies on technologies that are illegal in some country (even if that country is my own). I still care about Firefox identifying itself as Firefox in the User-Agent header, so as to avoid broken sites (such as https://room.co/), but I don't want to use binaries from Mozilla, because they are built against outdated technology (i.e. appropriate for something like RHEL 5). And, obviously, I care about modern and bug-free packages, or at least about non-upstream bugs (and, ideally, upstream bugs, too) being fixed promptly.

Also, I rely on a feature that is no longer found upstream in any desktop environment: full-screen color correction, even in games. Yes, I have a colorimeter.

This was necessary with my old Sony VAIO Z23A4R laptop, because it had a wide-gamut screen (94% coverage of Adobe RGB) that produced very oversaturated colors by default. It is also necessary on my new laptop, a Lenovo Ideapad Yoga 2 Pro, because otherwise it is very hard to convince it to display yellow. Contrary to popular claims, it can display yellow, even in Linux, given the exact RGB values, but even slight changes (which would only produce a slightly different shade of yellow on a normal screen) make it display a yellowish-red or yellowish-green color instead.

So, it must be easy for me to install extra packages (such as CompICC) from source and, ideally, have them integrated into package management. And the fewer such extra packages are needed for full-screen color correction, the better.

Now back to Gentoo. It still allows me to ignore lawyers, too-radical Free Software proponents, and their crippling effect on the software that I want to use. It mostly still allows me to keep suspiciously new infrastructure out of the equation. For full-screen color correction, I need exactly one ebuild that is not in the main Portage tree (CompICC). But other packages have started to suffer from bit rot.

Problem 1: The MATE desktop environment is stuck at version 1.8, probably just due to a lack of manpower to review the updates. This is bug 551588.
Problem 2: An attempt to upgrade GNOME to version 3.18 brought in a lot of C++11-related breakage that wasn't handled promptly enough, e.g. by reverting the upgrade. This is bug 566328.
Problem 3: QEMU will not let Windows 8 guests use resolutions higher than 1024x768. Upstream QEMU does not have this bug; it is an invention of overzealous unbundling that replaced a perfectly working bundled VGA BIOS with an inferior copy of the Bochs VGA BIOS. This is bug 529862.

I don't yet know which Linux distribution I will use instead. Maybe Arch (but it requires so much stuff from AUR just to build CompICC! Maybe I should use Compiz-CMS instead), maybe something else. We'll see.

Sunday, October 18, 2015

Still using icims.com for recruiting? Think again!

If your company has open vacancies and uses some system for pre-screening candidates (e.g. by giving them questions to answer), I have a "small" task for you. Go to your system, answer the questions as if you were a candidate, validate the answers as you would expect a candidate to (e.g. actually perform the actions that an answer describes), and then save the results. Look at the whole process, decide for yourself whether the system is usable for its stated purpose, and communicate the conclusion to your management if needed.

If you are using icims.com for hiring technical candidates, the answer is most probably "not suitable at all".

The most annoying bug that icims.com has is that it does not allow the candidate to enter certain characters in certain positions. The exact error message is:
Q3 2 Contains invalid characters. You cannot use the characters: ' " \ / or ` in an enclosing instance of <>, <<, >> or ><.
This triggers at least on the following types of input:
  • XML or HTML
  • Command redirections, e.g.: echo "foo bar" >> baz.txt
  • Sequences of menu items to click, e.g.: "File > New > Folder", if a bad character happens to be before that
So, you cannot ask questions about HTML, shell scripting, or even general questions about using GUI-based applications.

This error message probably means that they are concerned about XSS attacks. However, filtering out "invalid" characters is a very sloppy way to protect against such attacks, and it imposes completely unreasonable restrictions on user input.

In fact, any kind of input (including XML, shell scripts, or text about clicking through menus) is suitable, and it can be displayed safely and correctly in any browser simply by escaping the special characters when generating the HTML page. Many template engines do this escaping automatically; today, there is simply no excuse not to use one.
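
For illustration, here is a minimal sketch of such escaping written as a shell filter (a real application would let its template engine do this); note that "&" must be escaped first:

html_escape() {
  sed -e 's/&/\&amp;/g'  -e 's/</\&lt;/g'   -e 's/>/\&gt;/g' \
      -e 's/"/\&quot;/g' -e "s/'/\&#39;/g"
}
printf '%s\n' 'echo "foo bar" >> baz.txt' | html_escape
# -> echo &quot;foo bar&quot; &gt;&gt; baz.txt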

A candidate who sees such an error becomes demotivated. It is a pointless barrier between you and the correct answer. It also signals that you don't care about your customers (by choosing business partners who tolerate such sloppy practices). Worse, some of your candidates (those who see icims.com for the first time) may think that it is your product, or your internal system, and that it is you (not icims.com) who employs web developers with insufficient skills, i.e. that your company is not good enough to work for, because you don't weed out underqualified workers.

You don't want to lose candidates. So you don't want to use icims.com. Really.

Monday, September 15, 2014

Why static analyzers should see all the code

Just for fun, I decided to run the C code of the new "standard Markdown" implementation (stmd) through the static analyzer provided by the Clang project. On the surface, this looks very easy:


CCC_CC=clang scan-build make stmd

It even finds bugs: a lot of dead assignments, and some logic and memory errors, namely a null-pointer dereference, memory leaks, and a double free. But are they real?

E.g., it complains that the following piece of code in src/bstrlib.c introduces a possible leak of the memory pointed to by buff, which was previously allocated in the same function:


bdestroy (buff);
return ret;

The analyzer does not understand that bdestroy() is a memory deallocation function; as far as it is concerned, it could be anything, possibly defined in a different file. And bdestroy() really does not destroy the buffer (and thus leaks the memory) if it detects an integrity error, and its return code is never checked here.

So yes, the code around bdestroy() smells a bit. But is it a real problem? And how can we get clang to understand that this leak cannot actually happen?

Part of the problem stems from the fact that clang looks at one file at a time and thus does not understand dependencies between functions defined in different files. There is, however, a way to fix it.

All we need to do is to create a C source file that includes all other C source files. Let's call it "all.c".


#include "blocks.c"
#include "bstrlib.c"
#include "detab.c"
#include "html.c"
#include "inlines.c"
#include "main.c"
#include "print.c"
#include "scanners.c"
#include "utf8.c"

Unfortunately, it does not compile out of the box, because of the conflicting "advance" macros in inlines.c and utf8.c (fixable by undefining the macro at the end of each of these files), and because of the missing include guard in stmd.h (trivially fixable by adding one). With that done, one can submit this all-inclusive file to the static analyzer:


scan-build clang -g -O3 -Wall -std=c99 -c -o src/all.o src/all.c

Result: no bugs found, except dead assignments.


Saturday, May 17, 2014

Antispam misconfigurations

Introduction

This blog post is about ensuring correct operation of one particular antispam solution. However, I think that the thoughts about possible misconfigurations expressed here apply to most of them.

The following combination of mail-related software is quite popular: Postfix + DSPAM + Dovecot. Each of these products comes with an extensive user manual, and packages are available for almost every Linux distribution. So, I decided to use this combination for the company mail. In fact, Postfix and Dovecot were already installed (with all users being virtual), and it only remained to install DSPAM, because spam had become a problem for some users.

Here are the kinds of non-spam messages that go through our server: business mail (invoices, documents, commercial offers), technical support, discussions within the team, bugtracker tickets, and automated notifications (e.g. when contracts are about to expire).

There are many manuals on setting up DSPAM together with Postfix and Dovecot. Below are the common things mentioned in them.

Postfix should pass incoming mail to DSPAM; the preferred way to do this is LMTP over a Unix-domain socket. DSPAM should add its X-DSPAM-* headers and reinject the message into Postfix. Postfix should then contact Dovecot via LMTP, and the message finally gets delivered to the user's mailbox (or to the spam folder, with the help of a Sieve filter). If DSPAM makes a mistake, the user can move the message to the appropriate folder via IMAP, and the dovecot-antispam plugin will train DSPAM on this incident.

So far so good. I installed DSPAM (with a simple hash-driver backend) and configured the rest of the mail-related software to use it. It even appeared to work for me after some initial training. But then we ran into problems that are not explicitly mentioned in the manuals, described below. If you are reading this post, please test your mail servers for them, too.

Training did not work for some users

Some users, including myself, used their full e-mail address (including the company domain) as their IMAP username, and some didn't include the domain part. Both setups worked for sending and receiving mail. However, in the initial configuration, the user's login was passed to dspam-train as-is:

antispam_dspam_args = --deliver=;--client;--user;%u

Result: for some users (those who didn't append the domain to their IMAP username), the retraining process looked for the hash file in /var/spool/dspam/data/local, while the hash file is always in /var/spool/dspam/data/ourdomain.ru. The fix is to spell out the domain explicitly:

antispam_dspam_args = --deliver=;--client;--user;%n@ourdomain.ru

In fact, I think that any use of %u in Dovecot configuration is wrong if you have only one domain on the mail server.

Duplicate e-mail from monitoring scripts

Monitoring scripts on other hosts send e-mail to root@ourdomain.ru when something bad happens. However, after configuring DSPAM, each such message arrived in my mailbox twice. This happened because the "root" alias is expanded recursively (which is fine by itself, as root is virtual and has nothing to do with uid 0): we want to archive all root mail for easy reference, as well as deliver it to the actual sysadmins. The alias expansion happened twice: once before DSPAM and once after it. The solution is to disable one of the two expansions. I disabled the one before DSPAM:

smtp      inet  n       -       n       -       -       smtpd
  -o content_filter=lmtp:unix:/var/run/dspam/dspam.sock
  -o receive_override_options=no_address_mappings

However, this was a mistake.

Training still did not work for sales

The sales team complained that they were unable to train DSPAM so that incoming commercial queries would end up in their inbox and not in the spam folder. Manual training didn't help, either. This turned out to be a variation of the first problem: a wrong path to the hash file.

The sales team has a "sales" mail alias that expands to all of them. Because of the previous "fix", Postfix told DSPAM that the mail was addressed to sales@ourdomain.ru:

smtp      inet  n       -       n       -       -       smtpd
  -o content_filter=lmtp:unix:/var/run/dspam/dspam.sock
  -o receive_override_options=no_address_mappings

Thus, DSPAM placed the hash file in /var/spool/dspam/data/ourdomain.ru/sales, while the training process looked in /var/spool/dspam/data/ourdomain.ru/$person. The solution was to move the no_address_mappings option to after DSPAM, i.e. to the reinjection service. This way, both DSPAM and the dovecot-antispam plugin see the expanded recipient addresses.
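
In master.cf terms, the change looks roughly like this (the address and port of the reinjection listener are illustrative; only the placement of no_address_mappings matters):

smtp      inet  n       -       n       -       -       smtpd
  -o content_filter=lmtp:unix:/var/run/dspam/dspam.sock
127.0.0.1:10026 inet n  -       n       -       -       smtpd
  -o content_filter=
  -o receive_override_options=no_address_mappings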

Some e-mail from new team members was marked as spam

A general expectation is that authenticated e-mail sent from one user to another on the same corporate mail server is not spam. However, new team members (and even some old ones) had misconfigured their e-mail clients to use port 25 (with STARTTLS and authentication) for outgoing e-mail. As a result, all of their outgoing e-mail was processed by DSPAM, because the only factor that decides whether to process a message is the port it arrived on. The solution was to get everyone on the team to use port 587 for outgoing e-mail, which is not configured to pass messages through DSPAM. It would also have been nice to make authentication always fail on port 25, but I haven't done that yet.
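
The relevant master.cf fragment might look roughly like this (a sketch, not our exact configuration): the submission service requires TLS and authentication and bypasses the DSPAM content filter, while the port 25 smtpd could additionally refuse authentication outright:

submission inet n       -       n       -       -       smtpd
  -o syslog_name=postfix/submission
  -o smtpd_tls_security_level=encrypt
  -o smtpd_sasl_auth_enable=yes
# and, on the port 25 smtpd:
#   -o smtpd_sasl_auth_enable=no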

Outgoing e-mail was sometimes marked as spam

The general expectation is that outgoing mail should never be marked as spam, even if it actually is spam. If you disagree, note that in such a case there is nobody who would notice the problem, and nobody except root who could retrain the spam filter.

This is mostly a duplicate of the previous item, with an interesting twist. Namely, there are some web scripts and cron jobs that send mail to external users, and both connect to 127.0.0.1:25 without authentication. I solved this by splitting the default smtp line in master.cf into two: one for 127.0.0.1:smtp and one for the external IP address. Spam filtering is enabled only on the second one.
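
Roughly like this, where 192.0.2.10 stands in for the real external address:

127.0.0.1:smtp  inet  n -       n       -       -       smtpd
192.0.2.10:smtp inet  n -       n       -       -       smtpd
  -o content_filter=lmtp:unix:/var/run/dspam/dspam.sock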

Conclusion


It works! Or at least pretends to work. With so many pitfalls already seen, I cannot be sure.