ubercoder

Cross-posts from GitHub Gists: Ceph RGW StaticSites documentation, and S3 Boto magic

Cross-posting some pieces I've written up elsewhere as GitHub Gists:
- How to set up Ceph RGW StaticSites (S3 Website mode). I wrote the code over the course of the last year, and this is the first solid documentation for setting it up. As for 'using' it, any S3 client with WebsiteConfiguration support should just work.
- Boto S3: how to muck with where it actually connects. Boto S3 tries to be smart about where it connects: it takes the hostname you give it and uses it for most things. This makes some testing fun where you want it to request a certain hostname but actually connect somewhere entirely different; a quick sketch follows below.
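
As an illustration of the starting point (a sketch only, not the full trick from the gist; the endpoint, port, and credentials are placeholders), boto 2.x lets you point S3Connection at an arbitrary host:

import boto
from boto.s3.connection import S3Connection, OrdinaryCallingFormat

# Connect to a non-AWS endpoint (e.g. a local RGW) instead of s3.amazonaws.com.
# OrdinaryCallingFormat keeps the bucket in the path rather than the Host header.
conn = S3Connection(
    aws_access_key_id='PLACEHOLDER',
    aws_secret_access_key='PLACEHOLDER',
    host='rgw.example.com',   # placeholder endpoint
    port=8080,
    is_secure=False,
    calling_format=OrdinaryCallingFormat(),
)
print conn.get_all_buckets()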

LVM: convert linear to striped

This requires temporarily having twice the size of your LVM volume available. You need to create a mirror of your data, with the new leg of the mirror striped over the target disks, then drop the old, unstriped leg of the mirror. If you want to stripe over ALL of your disks (including the one that was already used), you also need to specify --alloc anywhere, otherwise the mirror code will refuse to use any disk twice.
# convert to a mirror (-m1), with new leg striped over 4 disks: /dev/sdb, /dev/sdc, /dev/sdd, /dev/sde
# --mirrorlog core - use in-memory status during the conversion
# --interval 1: print status every second
lvconvert --interval 1 -m1 $myvg/$mylv --mirrorlog core --type mirror --stripes 4 /dev/sd{b,c,d,e}
# drop the old leg, /dev/sda
lvconvert --interval 1 -m0 $myvg/$mylv  /dev/sda
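
To verify the result afterwards, lvs can report the stripe count and backing devices (a quick check, assuming the same VG/LV names as above):
# should now show 4 stripes across sdb-sde, with no remaining sda segments
lvs -o +stripes,devices $myvg/$mylv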

Ceph RGW Hammer->Jewel upgrade: adding realms, periods etc

Some quick notes on upgrading a Hammer-era Ceph RGW setup to Jewel, because the upstream notes don't cover it well. The multisite docs are the closest thing available, but here's what I put together instead.

  • The Zone concept has remained the same.
  • A Region is now a Zonegroup.
  • The top-level RegionMap has moved inside the content of a Period.
  • Only one Period can be live at a time, and changes are made to a non-live Period.
  • The Realm describes which Period is live.
  • Additionally, there can be a default Zonegroup and Zone inside the period, as well as a default Zone inside a Zonegroup.
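
In terms of commands, the renames work out roughly like this (a rough mapping based on the commands used below, not an exhaustive list):
# Hammer                          Jewel
radosgw-admin region list      -> radosgw-admin zonegroup list
radosgw-admin region get       -> radosgw-admin zonegroup get
radosgw-admin region-map get   -> radosgw-admin period get   # the map now lives inside the period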

Initial state, if you were to look on Hammer:
# radosgw-admin region list
{
  "default_info": {
    "default_region": "default"
  },
  "regions": [
    "default"
  ]
}
# radosgw-admin region-map get
{
  "master_region": "default",
  "bucket_quota": {
    "max_objects": -1,
    "enabled": false,
    "max_size_kb": -1
  },
  "user_quota": {
    "max_objects": -1,
    "enabled": false,
    "max_size_kb": -1
  },
  "regions": [
    {
      "val": {
        "zones": [
          {
            "name": "default",
            "log_meta": "false",
            "endpoints": [

            ],
            "bucket_index_max_shards": 31,
            "log_data": "false"
          }
        ],
        "name": "default",
        "endpoints": [
	      "https://CENSORED-1.EXAMPLE.COM",
	      "https://CENSORED-2.EXAMPLE.COM"
        ],
        "api_name": "CENSORED",
        "default_placement": "default-placement",
        "is_master": "true",
        "hostnames": [
	      "CENSORED-1.EXAMPLE.COM",
	      "CENSORED-2.EXAMPLE.COM"
        ],
        "placement_targets": [
          {
            "name": "default-placement",
            "tags": [

            ]
          }
        ],
        "master_zone": ""
      },
      "key": "default"
    }
  ]
}
# radosgw-admin region get --rgw-region=default
{
  "zones": [
    {
      "log_meta": "false",
      "name": "default",
      "bucket_index_max_shards": 31,
      "endpoints": [

      ],
      "log_data": "false"
    }
  ],
  "master_zone": "",
  "is_master": "true",
  "placement_targets": [
    {
      "name": "default-placement",
      "tags": [

      ]
    }
  ],
  "default_placement": "default-placement",
  "name": "default",
  "hostnames": [
	"CENSORED-1.EXAMPLE.COM",
	"CENSORED-2.EXAMPLE.COM"
  ],
  "endpoints": [
    "https://CENSORED-1.EXAMPLE.COM",
    "https://CENSORED-2.EXAMPLE.COM"
  ],
  "api_name": "CENSORED"
}
# radosgw-admin zone get --rgw-region=default --rgw-zone=default
{
  "log_pool": ".log",
  "user_swift_pool": ".users.swift",
  "placement_pools": [
    {
      "val": {
        "data_pool": ".rgw.buckets",
        "data_extra_pool": ".rgw.buckets.extra",
        "index_pool": ".rgw.buckets.index"
      },
      "key": "default-placement"
    }
  ],
  "user_keys_pool": ".users",
  "control_pool": ".rgw.control",
  "domain_root": ".rgw",
  "usage_log_pool": ".usage",
  "gc_pool": ".rgw.gc",
  "system_key": {
    "access_key": "",
    "secret_key": ""
  },
  "intent_log_pool": ".intent-log",
  "user_uid_pool": ".users.uid",
  "user_email_pool": ".users.email"
}


Initial state, if you were to look on Jewel:
# radosgw-admin zone list
{
    "default_info": "",
    "zones": [
        "default"
    ]
}
# radosgw-admin zonegroup list
{
    "default_info": "",
    "zonegroups": [
        "default"
    ]
}
# TODO: fill the rest of this up.

# Now changing stuff up:
# export SYSTEM_ACCESS_KEY=... SYSTEM_SECRET_KEY=...
# radosgw-admin user create \
  --system \
  --uid=zone.user \
  --display-name="Zone User" \
  --access-key=$SYSTEM_ACCESS_KEY \
  --secret=$SYSTEM_SECRET_KEY
{
  "user_id": "zone.user",
  "display_name": "Zone User",
  "email": "",
  "suspended": 0,
  "max_buckets": 1000,
  "auid": 0,
  "subusers": [],
  "keys": [
      {
          "user": "zone.user",
          "access_key": "...",
          "secret_key": "..."
      }
  ],
  "swift_keys": [],
  "caps": [],
  "op_mask": "read, write, delete",
  "system": "true",
  "default_placement": "",
  "placement_tags": [],
  "bucket_quota": {
      "enabled": false,
      "max_size_kb": -1,
      "max_objects": -1
  },
  "user_quota": {
      "enabled": false,
      "max_size_kb": -1,
      "max_objects": -1
  },
  "temp_url_keys": []
}


# radosgw-admin realm create --rgw-realm gold
{
    "id": "1ac4fd8d-9e77-4fd2-ad54-b591f1734103",
    "name": "gold",
    "current_period": "f8fafae9-b6d2-41f6-b7aa-7b03fea57bfb",
    "epoch": 1
}


# radosgw-admin realm list
{
    "default_info": "1ac4fd8d-9e77-4fd2-ad54-b591f1734103",
    "realms": [
        "gold"
    ]
}


# radosgw-admin realm get
{
    "id": "1ac4fd8d-9e77-4fd2-ad54-b591f1734103",
    "name": "gold",
    "current_period": "f8fafae9-b6d2-41f6-b7aa-7b03fea57bfb",
    "epoch": 1
}


# radosgw-admin period list
{
    "periods": [
        "f8fafae9-b6d2-41f6-b7aa-7b03fea57bfb"
    ]
}


# radosgw-admin period get
{
    "id": "f8fafae9-b6d2-41f6-b7aa-7b03fea57bfb",
    "epoch": 1,
    "predecessor_uuid": "",
    "sync_status": [],
    "period_map": {
        "id": "f8fafae9-b6d2-41f6-b7aa-7b03fea57bfb",
        "zonegroups": [],
        "short_zone_ids": []
    },
    "master_zonegroup": "",
    "master_zone": "",
    "period_config": {
        "bucket_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        },
        "user_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        }
    },
    "realm_id": "1ac4fd8d-9e77-4fd2-ad54-b591f1734103",
    "realm_name": "gold",
    "realm_epoch": 1
}


# radosgw-admin period update --master-zone=default --master-zonegroup=default
{
    "id": "1ac4fd8d-9e77-4fd2-ad54-b591f1734103:staging",
    "epoch": 1,
    "predecessor_uuid": "f8fafae9-b6d2-41f6-b7aa-7b03fea57bfb",
    "sync_status": [],
    "period_map": {
        "id": "f8fafae9-b6d2-41f6-b7aa-7b03fea57bfb",
        "zonegroups": [],
        "short_zone_ids": []
    },
    "master_zonegroup": "",
    "master_zone": "",
    "period_config": {
        "bucket_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        },
        "user_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        }
    },
    "realm_id": "1ac4fd8d-9e77-4fd2-ad54-b591f1734103",
    "realm_name": "gold",
    "realm_epoch": 2
}


# radosgw-admin period prepare
{
    "id": "8fb1cfbc-ad63-4d92-886a-d939cc52862b",
    "epoch": 1,
    "predecessor_uuid": "",
    "sync_status": [],
    "period_map": {
        "id": "8fb1cfbc-ad63-4d92-886a-d939cc52862b",
        "zonegroups": [],
        "short_zone_ids": []
    },
    "master_zonegroup": "",
    "master_zone": "",
    "period_config": {
        "bucket_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        },
        "user_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        }
    },
    "realm_id": "1ac4fd8d-9e77-4fd2-ad54-b591f1734103",
    "realm_name": "gold",
    "realm_epoch": 1
}

# radosgw-admin zone get --rgw-zonegroup=default --rgw-zone=default >zone.json
# radosgw-admin zonegroup get --rgw-zonegroup=default --rgw-zone=default >zonegroup.json
# $EDITOR zonegroup.json zone.json
## Add the following data:
## both files: Set realm_id
## zone.json: Set system_user.access_key, Set system_user.secret_key
## zonegroup.json: Set master_zone to "default", Set is_master to "true".
# radosgw-admin zone set --rgw-zone=default --rgw-zonegroup=default \
  --realm-id=1ac4fd8d-9e77-4fd2-ad54-b591f1734103 \
  --infile zone.json \
  --master --default
# radosgw-admin zonegroup set --rgw-zonegroup=default \
  --realm-id=1ac4fd8d-9e77-4fd2-ad54-b591f1734103 \
  --infile zonegroup.json \
  --master --default


# radosgw-admin period update
{
    "id": "1ac4fd8d-9e77-4fd2-ad54-b591f1734103:staging",
    "epoch": 1,
    "predecessor_uuid": "f8fafae9-b6d2-41f6-b7aa-7b03fea57bfb",
    "sync_status": [],
    "period_map": {
        "id": "f8fafae9-b6d2-41f6-b7aa-7b03fea57bfb",
        "zonegroups": [
            {
                "id": "default",
                "name": "default",
                "api_name": "CENSORED",
                "is_master": "true",
                "endpoints": [
                    "https:\/\/CENSORED-1.EXAMPLE.COM",
                    "https:\/\/CENSORED-2.EXAMPLE.COM"
                ],
                "hostnames": [
                    "CENSORED-1.EXAMPLE.COM",
                    "CENSORED-2.EXAMPLE.COM"
                ],
                "hostnames_s3website": [],
                "master_zone": "default",
                "zones": [
                    {
                        "id": "default",
                        "name": "default",
                        "endpoints": [],
                        "log_meta": "true",
                        "log_data": "false",
                        "bucket_index_max_shards": 31,
                        "read_only": "false"
                    }
                ],
                "placement_targets": [
                    {
                        "name": "default-placement",
                        "tags": []
                    }
                ],
                "default_placement": "default-placement",
                "realm_id": "1ac4fd8d-9e77-4fd2-ad54-b591f1734103"
            }
        ],
        "short_zone_ids": [
            {
                "key": "default",
                "val": 2610307010
            }
        ]
    },
    "master_zonegroup": "default",
    "master_zone": "default",
    "period_config": {
        "bucket_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        },
        "user_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        }
    },
    "realm_id": "1ac4fd8d-9e77-4fd2-ad54-b591f1734103",
    "realm_name": "gold",
    "realm_epoch": 2
}


# radosgw-admin period commit
2016-08-16 17:51:22.324368 7f8562da6900  0 error read_lastest_epoch .rgw.root:periods.8d0d4955-592c-48b5-93d1-3fa1cec17579.latest_epoch
2016-08-16 17:51:22.347375 7f8562da6900  1 Set the period's master zonegroup default as the default
{
    "id": "8d0d4955-592c-48b5-93d1-3fa1cec17579",
    "epoch": 1,
    "predecessor_uuid": "f8fafae9-b6d2-41f6-b7aa-7b03fea57bfb",
    "sync_status": [
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        ""
    ],
    "period_map": {
        "id": "8d0d4955-592c-48b5-93d1-3fa1cec17579",
        "zonegroups": [
            {
                "id": "default",
                "name": "default",
                "api_name": "CENSORED",
                "is_master": "true",
                "endpoints": [
                    "https:\/\/CENSORED-1.EXAMPLE.COM",
                    "https:\/\/CENSORED-2.EXAMPLE.COM"
                ],
                "hostnames": [
                    "CENSORED-1.EXAMPLE.COM",
                    "CENSORED-2.EXAMPLE.COM"
                ],
                "hostnames_s3website": [],
                "master_zone": "default",
                "zones": [
                    {
                        "id": "default",
                        "name": "default",
                        "endpoints": [],
                        "log_meta": "true",
                        "log_data": "false",
                        "bucket_index_max_shards": 31,
                        "read_only": "false"
                    }
                ],
                "placement_targets": [
                    {
                        "name": "default-placement",
                        "tags": []
                    }
                ],
                "default_placement": "default-placement",
                "realm_id": "1ac4fd8d-9e77-4fd2-ad54-b591f1734103"
            }
        ],
        "short_zone_ids": [
            {
                "key": "default",
                "val": 2610307010
            }
        ]
    },
    "master_zonegroup": "default",
    "master_zone": "default",
    "period_config": {
        "bucket_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        },
        "user_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        }
    },
    "realm_id": "1ac4fd8d-9e77-4fd2-ad54-b591f1734103",
    "realm_name": "gold",
    "realm_epoch": 2
}



Ceph: RBD resizing workarounds, Python API example

Someone recently misread the help for 'rbd resize' and made an RBD volume extremely large. If the units were bytes, this command would make sense:
rbd resize --size $((155*1024*1024*1024)) $RBD_VOL_NAME
But, as it happens, the units are actually in MiB.
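
For comparison, the resize that was actually intended (155 GiB) would have looked like this, with --size given in MiB (same volume name assumed):
# 155 GiB expressed in MiB, which is the unit --size expects here
rbd resize --size $((155*1024)) $RBD_VOL_NAME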

# rbd info $RBD_VOL_NAME
rbd image '$RBD_VOL_NAME':
 size 155 PB in 41607495680 objects
 ...
Oops, that's a bit too big, and I doubt your cluster has that much space for a single volume, even if it's sparsely allocated.

There is the --allow-shrink option to rbd resize, however on a volume this large, at least in the Hammer release, it will pretty much never return (it eventually times out).

Future work for RBD resize might include a much more intelligent resize command that checks for the highest non-null block, perhaps using a reverse iterator looking for keys named at most the size of the device.
Making resize ask for confirmation that shows the new size, or adding a dry-run option, would also help avoid this problem.

So, as a workaround, I give you the following script, which changes the 'size' omap field directly. It is wildly unsafe if you are already using anything past the target size, but it's fine otherwise.
You need to ensure nothing is using the volume when you apply this...
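
One way to check that is to list watchers on the image's header object (a sketch; assumes the default 'rbd' pool, and IMAGE_ID is a placeholder for the id portion of block_name_prefix from 'rbd info', the same object name the script below derives):
# no output means no client currently has the image open
rados -p rbd listwatchers rbd_header.$IMAGE_ID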

# Prerequisite: From Infernalis source or newer, src/pybind/rbd.py, src/pybind/rados.py 
# Make sure you have this fix: https://github.com/ceph/ceph/pull/6220
from rados import Rados, ReadOpCtx, WriteOpCtx
import rbd
import struct
import ctypes
  
RBD_NAME = '...'
NEWSIZE = 155 * 1024 * 1024 * 1024 # 155 GiB

with Rados(conffile='/etc/ceph/ceph.conf') as r:
  print 'RADOS object', r
  print 'RADOS version', r.version()
  print 'Cluster Stats', r.get_cluster_stats()
  
  with r.open_ioctx('rbd') as ioctx:
    with rbd.Image(ioctx, RBD_NAME, read_only=True) as img:
      imgstat = img.stat()
      print imgstat
      header_name = imgstat['block_name_prefix'].replace('rbd_data.', 'rbd_header.', 1)

    with ReadOpCtx(ioctx) as read_op:
      kv_it = ioctx.get_omap_vals(read_op, "", "", 500)
      ioctx.operate_read_op(read_op, header_name)
      kv = dict(kv_it[0])
      print 'OMAP of', header_name, kv
  
    newkeys = ('size', )
    newvals = (struct.pack('<Q', NEWSIZE), ) # unsigned long long, little endian
    with WriteOpCtx(ioctx) as write_op:
      ioctx.set_omap(write_op, newkeys, newvals)
      ioctx.operate_write_op(write_op, header_name)
  ###  
    with rbd.Image(ioctx, RBD_NAME, read_only=True) as img:
      imgstat = img.stat()
      print imgstat

# END OF SCRIPT

gnupg-2.1 mutt

For the mutt users with GnuPG: depending on your configuration, you might notice that mutt's handling of GnuPG mail stopped working with GnuPG 2.1. There were a few specific cases that would have caused this, which I'll detail, but if you just want it to work again, put the below into your Muttrc and make the tweak to gpg-agent.conf. The underlying cause for most of it is that secret key operations have moved to the agent, and many Mutt users used the agent-less mode, because Mutt handled the passphrase nicely on its own.

  • -u must now come BEFORE --clearsign
  • Add allow-loopback-pinentry to gpg-agent.conf and restart the agent (see the snippet after the Muttrc settings below)
  • The below config adds --pinentry-mode loopback before --passphrase-fd 0, so that GnuPG (and the agent) will still accept the passphrase from Mutt.
  • --verbose is optional; depending on what you're doing, you might find --no-verbose cleaner.
  • --trust-model always is a personal preference for my Mutt mail usage, because I do try to curate my keyring.
set pgp_autosign = yes
set pgp_use_gpg_agent = no
set pgp_timeout = 600
set pgp_sign_as="(your key here)"
set pgp_ignore_subkeys = no

set pgp_decode_command="gpg %?p?--pinentry-mode loopback  --passphrase-fd 0? --verbose --no-auto-check-trustdb --batch --output - %f"
set pgp_verify_command="gpg --pinentry-mode loopback --verbose --batch --output - --no-auto-check-trustdb --verify %s %f"
set pgp_decrypt_command="gpg %?p?--pinentry-mode loopback --passphrase-fd 0? --verbose --batch --output - %f"
set pgp_sign_command="gpg %?p?--pinentry-mode loopback --passphrase-fd 0? --verbose --batch --output - --armor --textmode %?a?-u %a? --detach-sign %f"
set pgp_clearsign_command="gpg %?p?--pinentry-mode loopback --passphrase-fd 0? --verbose --batch --output - --armor --textmode %?a?-u %a? --detach-sign %f"
set pgp_encrypt_sign_command="pgpewrap gpg %?p?--pinentry-mode loopback --passphrase-fd 0? --verbose --batch --textmode --trust-model always --output - %?a?-u %a? --armor --encrypt --sign -- -r %r -- %f"
set pgp_encrypt_only_command="pgpewrap gpg %?p?--pinentry-mode loopback --passphrase-fd 0? --verbose --batch --trust-model always --output - --encrypt --textmode --armor -- -r %r -- %f"
set pgp_import_command="gpg %?p?--pinentry-mode loopback --passphrase-fd 0? --verbose --import -v %f"
set pgp_export_command="gpg %?p?--pinentry-mode loopback --passphrase-fd 0? --verbose --export --armor %r"
set pgp_verify_key_command="gpg %?p?--pinentry-mode loopback --passphrase-fd 0? --verbose --batch --fingerprint --check-sigs %r"
set pgp_list_pubring_command="gpg %?p?--pinentry-mode loopback --passphrase-fd 0? --verbose --batch --with-colons --list-keys %r"
set pgp_list_secring_command="gpg %?p?--pinentry-mode loopback --passphrase-fd 0? --verbose --batch --with-colons --list-secret-keys %r"
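
And the gpg-agent.conf tweak mentioned above, as a minimal example (gpg-connect-agent ships with GnuPG 2.1):
# ~/.gnupg/gpg-agent.conf
allow-loopback-pinentry

# then reload the running agent
gpg-connect-agent reloadagent /bye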

This entry was originally posted at http://robbat2.dreamwidth.org/238770.html. Please comment there using OpenID.

Mail Bounces & Gmail/GApps users: The ugly truth of DMARC in open-source mailing lists

This is a slightly edited copy of an email I sent to the mailing lists of my local hackspace, VHS. I presently run their mailing lists for historical reasons, but we're working on migrating them slowly.


Hi all,

Speaking as your email list administrator here. I've tried to keep the logs below as intact as possible; I've censored only one user's domain (as it is explicitly identifying information), and two other recipient addresses.

There have been a lot of reports lately of bounce notices from the list, and users have correctly contacted me, wondering what's going on. The bounce messages are seen primarily by users on Gmail and hosted Google Apps, but the problems do ultimately affect everybody.

67.6% of the vhs-general list uses either gmail or google apps (347 subs of 513). For the vhs-members list it's 68.3% (both of these stats created by checking if the MX record for the user's domain points to Google).

Google decides that a certain list message is too much like spam because of two things:

  • because of content
  • because of DMARC policy

Content:

We CAN do something about the content.

Please don't send email that is only one or two lines, containing just a URL and a short line of text. It's really suspicious and spam-like.

Include a better description (two or three lines) with the URL.

This gets an entry in the mailserver logs like:

delivery 47198: failure:
+173.194.79.26_failed_after_I_sent_the_message./Remote_host_said:_550-5.7.1_[66.196.40.251______12]_Our_system_has_detected_that_this_message_is/550-5.7.1_likely_unsolicited_mail._To_reduce_the_amount_of_spam_sent_to_Gmail,/550-5.7.1_this_message_has_been_blocked._Please_visit/550-5.7.1_http://support.google.com/m
+ail/bin/answer.py?hl=en&answer=188131_for/550_5.7.1_more_information._mu18si1139639pab.287_-_gsmtp/

That was triggered by this email earlier in the month:

> Subject: Kano OS for RasPi
> http://kano.me/downloads
> Apparently it's faster than Rasbian

DMARC policy:

TL;DR: If you work on an open-source mailing list app, please implement DMARC support ASAP!

Google and other big mail hosters have been working on an anti-spam measure called DMARC [1].

Unlike many prior attempts, it latches onto the From header as well as the SMTP envelope sender, and this unfortunately interferes with mailing lists [2], [3].

I do applaud the concept behind DMARC, but the rollout seems to be hurting lots of the small guys.

At least one person (Eric Sachs) at Google is aware of this [4]. There is no useful workaround that I can enact as a list admin right now, other than asking the one present user to tweak his mailserver if possible.

There is also no complete open-source DMARC support that I can find. Per the Google post above, the Mailman project is working on it [5], [6], but it's not yet available as of the last release. Our lists run on ezmlm-idx, and I run some other very large lists using mlmmj (gentoo.org) and sympa; none of them have DMARC support.

So far, the problem only triggers when a few conditions line up:

  • Recipient is on a mail service that implements DMARC (and DKIM and SPF)
  • Sender is on a domain that has a DMARC policy of reject

Of the 115 unique domains used by subscribers on this list, here are all the DMARC policies:

_dmarc.gmail.com.       600  IN TXT "v=DMARC1\; p=none\; rua=mailto:mailauth-reports@google.com"
_dmarc.USERDOMAIN.ca.   7200 IN TXT "v=DMARC1\; p=reject\; rua=mailto:azrxfkte@ag.dmarcian.com\; ruf=mailto:azrxfkte@fr.dmarcian.com\; adkim=s\; aspf=s"
_dmarc.icloud.com.      3600 IN TXT "v=DMARC1\; p=none\; rua=mailto:dmarc_agg@auth.returnpath.net, mailto:d@rua.agari.com\; ruf=mailto:d@ruf.agari.com, mailto:dmarc_afrf@auth.returnpath.net\;rf=afrf\;pct=100"
_dmarc.mac.com.         3600 IN TXT "v=DMARC1\; p=none\; rua=mailto:d@rua.agari.com\; ruf=mailto:d@ruf.agari.com\;"
_dmarc.me.com.          3600 IN TXT "v=DMARC1\; p=none\; rua=mailto:d@rua.agari.com\; ruf=mailto:d@ruf.agari.com\;"
_dmarc.yahoo.ca.        7200 IN TXT "v=DMARC1\; p=none\; pct=100\; rua=mailto:dmarc-yahoo-rua@yahoo-inc.com\;"
_dmarc.yahoo.com.       1800 IN TXT "v=DMARC1\; p=none\; pct=100\; rua=mailto:dmarc-yahoo-rua@yahoo-inc.com\;"
_dmarc.yahoo.co.uk.     1800 IN TXT "v=DMARC1\; p=none\; pct=100\; rua=mailto:dmarc-yahoo-rua@yahoo-inc.com\;"
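
You can check any domain's policy yourself with an ordinary DNS TXT lookup, e.g.:
dig +noall +answer TXT _dmarc.gmail.com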

Only one of those has a reject policy, but I suspect it's a matter of time until more of them do. I'm going to use USERDOMAIN.ca for the rest of the example; that user is indirectly responsible for many of the rejects we are seeing.

Step 1.

User sends this email.

From: A User <someuser@userdomain.ca>
To: vhs-general@lists.hackspace.ca

Delivered to list server via SMTP (these two addresses form the SMTP envelope)

MAIL FROM:<someuser@userdomain.ca>
RCPT TO:<vhs-general@lists.hackspace.ca>

Step 2.

If the MAIL-FROM envelope address is on the list of list subscribers, your message is accepted.

Step 3.0.

The list rewrites the mail for outgoing delivery, and uses SMTP VERP [7] to get the mail server to send the new message. This means it hands off a single copy of the email, along with a list of all recipients for the mail. The envelope-from address in this case encodes the name of the list and the number of the mail in the archive.

If it was delivering to me (robbat2@orbis-terrarum.net), the outgoing SMTP connection would look roughly like:

MAIL FROM:<vhs-general-return-18094-robbat2=orbis-terrarum.net@lists.hackspace.ca>
RCPT TO:<robbat2@orbis-terrarum.net>

And the mail itself still looks like:

From: A User <someuser@userdomain.ca>
To: vhs-general@lists.hackspace.ca

Step 3.1.

I got this email, and if I open it I see this telling me about the SMTP details:

Return-Path: <vhs-general-return-18094-robbat2=orbis-terrarum.net@lists.hackspace.ca>

I don't implement DMARC on my domain. If my system bounced the email, it would have gone to that address, and the list app would know that message 18094 on list vhs-general bounced to user robbat2@orbis-terrarum.net.

Step 3.2.

Google DOES implement DMARC, so let's run through that.

The key part of DMARC is that it takes the domain from the From header.

_dmarc.USERDOMAIN.ca.   7200 IN TXT "v=DMARC1\; p=reject\; rua=mailto:azrxfkte@ag.dmarcian.com\; ruf=mailto:azrxfkte@fr.dmarcian.com\; adkim=s\; aspf=s"

The relevant parts to us are:

p=reject, aspf=s

The aspf=s part applies strict SPF alignment: it says that mail with a From header of someuser@USERDOMAIN.ca must have an exactly matching domain in the MAIL FROM envelope address (@USERDOMAIN.ca).

It doesn't match, as the list changed the MAIL FROM address. The p=reject says to reject the mail if this happens.

This runs counter to the design principles of mailing lists, so DMARC has a bunch of options, all of which require changing the mail in some way.

Here are the logs from the above failure:

> 2014-03-19 11:19:50.783996500 new msg 98907
> 2014-03-19 11:19:50.783998500 info msg 98907: bytes 8864 from <vhs-general-return-18094-@lists.hackspace.ca-@[]> qp 32511 uid 89
> 2014-03-19 11:19:50.785359500 starting delivery 211352: msg 98907 to remote user1@gappsdomain.com
> 2014-03-19 11:19:50.785385500 status: local 1/10 remote 1/40
> 2014-03-19 11:19:50.785450500 starting delivery 211353: msg 98907 to remote user2@gmail.com
> ...
> 2014-03-19 11:19:58.713558500 delivery 211352: failure:
+74.125.25.27_failed_after_I_sent_the_message./Remote_host_said:_550-5.7.1_Unauthenticated_email_from_USERDOMAIN.ca_is_not_accepted_due_to_domain's/550-5.7.1_DMARC_policy._Please_contact_administrator_of_USERDOMAIN.ca_domain_if/550-5.7.1_this_was_a_legitimate_mail._Please_visit/550-5.7.1__http://support.google.com
+/mail/answer/2451690_to_learn_about_DMARC/550_5.7.1_initiative._ub8si9386628pac.133_-_gsmtp/
> 2014-03-19 11:19:59.053816500 delivery 211353: failure:
+173.194.79.26_failed_after_I_sent_the_message./Remote_host_said:_550-5.7.1_Unauthenticated_email_from_USERDOMAIN.ca_is_not_accepted_due_to_domain's/550-5.7.1_DMARC_policy._Please_contact_administrator_of_USERDOMAIN.ca_domain_if/550-5.7.1_this_was_a_legitimate_mail._Please_visit/550-5.7.1__http://support.google.co
+m/mail/answer/2451690_to_learn_about_DMARC/550_5.7.1_initiative._my2si9389106pab.76_-_gsmtp/

[1] http://dmarc.org/
[2] http://dmarc.org/faq.html#s_3
[3] http://dmarc.org/faq.html#r_2
[4] https://sites.google.com/site/oauthgoog/mlistsdkim
[5] http://www.marshut.com/qskkv/adding-dmarc-support-for-mailman-3.html
[6] https://code.launchpad.net/~jimpop/mailman/dmarc-reject
[7] http://en.wikipedia.org/wiki/Variable_envelope_return_path


Terrible PHP hacks: making PHP/FI style file uploads work in PHP5.5 and newer

One of my past consulting customers came to me with a problem. He'd been relatively diligent in upgrading his servers since I last spoke with him (it had been some years), and now the admin panel on one of his client's very old PHP websites was no longer working.

I knew the code had roots going back to at least PHP3, as the file headers I'd previously seen had copyright dates back to 1999. Little did I know, I was in for a treat today.

When I last visited this codebase, due to its terrible nature with hundreds of globals, I had to put some hacks in for PHP 5.4, since register_globals was no longer an option. The hack for this is quite simple:

foreach($_POST as $__k => $__v) { $$__k = $__v; }
foreach($_GET as $__k => $__v) { $$__k = $__v; }

Well, it seems that since the last upgrade they had also changed the register_long_arrays setting at the demand of another project, and the login on the old site was broken. This one is quite simple: just s/HTTP_SERVER_VARS/_SERVER/ (and similarly for POST/GET/COOKIE, depending on your site).

Almost all was well now, except the next complaint was that file uploads didn't work for several forms. I naively duplicated the _POST/_GET block above for $_FILES. No luck. Not remembering how file uploads used to work in early PHP, I set out to fix this.

I picked a good one to test with, and noticed that it used some of the very old PHP variables for file uploads (again globals). These files dated back to 1997 and PHP/FI! The initial solution was to map $_FILES[x]['tmp_name'] to $x, and the rest of $_FILES[x][y] to $x_y. Great, it seems to work now.

Except... one file upload form was still broken; it had multiple files allowed in a single form. Time for a more advanced hack:

# PHP/FI used this structure for files: http://www.php.net/manual/phpfi2.php#upload
foreach($_FILES as $__k => $__v) { 
  if(!is_array($__v['tmp_name'])) {
    $s = $__k;
    $$s = $__v['tmp_name'];
    $keys = array('name','size','type');
    foreach($keys as $k) {
      $s = $__k.'_'.$k;
      $$s = $__v[$k];
    }
  } else {
    for($i = 0; $i < count($__v['tmp_name']); $i++) {
      if(isset($__v['tmp_name'][$i])) {
        $s = $__k.'['.$i.']';
        $$s = $__v['tmp_name'][$i];
        $keys = array('name','size','type');
        foreach($keys as $k) {
          $s = $__k.'_'.$k.'['.$i.']';
          $$s = $__v[$k][$i];
        }
      }
    }
  }
}

Thus I solved the problem, having had to relearn how it used to be done with PHP/FI.


Adding 95th Percentile in Munin, without any patches: undocumented setting graph_args_after

Munin is commonly used to graph lots of systems stuff, however it lacks a common piece of functionality: 95th percentile.

The Munin bug tracker has had ticket #443 sitting open for 7 years now, asking for this, and providing a not-great patch for it.

I really wanted to add 95th percentile to one of my complicated graphs (4 base variables, and 3 derived variables deep), but I didn't like the above patch either. Reading the Munin source to consider implementing VDEF properly, I noticed an undocumented setting: graph_args_after. It was introduced by ticket #1032, as a way of passing things directly to rrdtool-graph.

Clever use of this variable can pass in ANYTHING else to rrdtool-graph, including VDEF! So without further ado, here's how to put 95th percentile into individual Munin graphs, relatively easily.

# GRAPHNAME is the name of the graph you want to render on.
# VARNAME is the name of the new variable to call the Percentile line.
# DEF_VAR is the name of the CDEF or DEF variable from earlier in your graph definition.
# LEGEND is whatever legend you want to display on the graph for the line.
#   FYI Normal rrdtool escaping rules apply for legend (spaces, pound, slash).
${GRAPHNAME}.graph_args_after \
  VDEF:${VARNAME}=gcdef${DEF_VAR},95,PERCENT \
  LINE1:${VARNAME}\#999999:${LEGEND}:dashes \
  GPRINT:${VARNAME}:\%6.2lf\%s\\j
# Example of the above I'm using
bandwidth1.graph_args_after \
  VDEF:totalperc=gcdeftotal,95,PERCENT \
  LINE1:totalperc\#999999:95th\ Percentile\ (billable\):dashes \
  GPRINT:totalperc:\%6.2lf\%s\\j

APC PDU: resetting passwords with SNMP instead of a serial cable

So recently, at one of the things I do for money, we got some used APC PDUs (AP7900). You can get them on eBay now for $100-$150 USD including shipping. APC still sells the identical model, so there's nothing wrong with used gear. However, when they arrive, it's possible that the last owner didn't remove the passwords. There are some general guides on the Internet, but they almost exclusively revolve around using a custom serial cable.

While this guide is aimed at APC PDUs, APC actually uses a common embedded OS on many of their products, and the SNMP trick documented here was derived from their document "Management Card Addendum", part number 990-6015A.

Finding the device IP

If you're really lucky, the device will issue a BOOTP or DHCP request on boot. Then you can easily figure it out from there. If not, read on.

In the case of the PDUs, there is a large grey button. Hold it for 30 seconds, then release and press again, and it will cycle through displaying the IP. For other devices, you might need to connect directly and sniff for traffic to figure out the IP, or issue ARP requests for possible IPs (scanning 192.168.0.0/16, 172.16.0.0/12 and 10.0.0.0/8 takes an hour or two with nmap, for example).
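
For the ARP-scan approach, something along these lines does the trick (nmap's ARP ping, run from the same L2 segment; adjust the ranges to suit):
# ARP ping sweep of one candidate subnet; repeat for the other RFC1918 ranges
nmap -sn -PR 192.168.0.0/24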

Default passwords

You might as well try all the default passwords first; it won't hurt. The protocols you want to try are Telnet, SSH, HTTP, and HTTPS. Try usernames of apc, device, and readonly (web interface only), all with a password of apc. If the username apc works, you don't need the rest of this document.

If your firmware is really old, you should also try any username with the password of TENmanUFactOryPOWER. This will drop you into factory test mode, and you can read the password from the EEPROM this way (option 13, then look at offset 0x1D0, but realize that the offset is different in various revisions). In later revisions, this password is only usable with a serial cable.

SNMP

This is where it gets interesting. The PDUs come with a stock configuration of two SNMP communities: public and private. If the latter works, we'll use it to reset the device entirely. Test with: snmpget -v 1 -c private $IP SNMPv2-MIB::sysDescr.0, where $IP is the IP you found before.

Resetting the device with SNMP

If you've made it this far, you're stuck with an APC device that you don't have administrator access to over Telnet or SSH, but the SNMP private community does work. You'll need to go and get the SNMP MIBs from APC next. Then you need a tool from APC, i2c301.exe; it is a Windows binary, but it runs perfectly fine under WINE. Paste the file below into rpdu.ini:
[NetworkTCP/IP]
SystemIP = 0.0.0.0
SubnetMask = 0.0.0.0
DefaultGateway = 0.0.0.0
Bootp = enabled
[NetworkTFTPClient]
RemoteIP = 0.0.0.0
[NetworkFTPClient]
RemoteIP = 0.0.0.0
RemoteUserName = apc
RemotePassword = apc
[NetworkFTPServer]
Access = enabled
Port = 21
[NetworkTelnet]
Access = enabled
Port = 23
[NetworkWeb]
Access = enabled
Port = 80
[NetworkSNMP]
Access = enabled
AccessControl1Community = public
AccessControl1NMSIP = 0.0.0.0
AccessControl1AccessType = read
AccessControl2Community = private
AccessControl2NMSIP = 0.0.0.0
AccessControl2AccessType = write
[NetworkDNS]
DNSServerIP = 0.0.0.0
[SystemID]
Name = Unknown
Contact = Unknown
Location = Unknown
[SystemDate/Time]
Date = 01/01/2014
Time = 12:00:00
[SystemUserManager]
Authentication = Basic
AutoLogout = 10
AdminUserName = apc
AdminPassword = apc
AdminAuthPhrase = admin user phrase
DeviceUserName = device
DevicePassword = apc
DeviceAuthPhrase = device user phrase

Run i2c301.exe rpdu.ini. This will generate apc.cfg. Set up a TFTP server on your local subnet, so that the PDU's IP will be able to reach it. Place that apc.cfg in a path where it can be reached; I used /apc/apc.cfg in my case. Now run the following commands, giving a second or so between them.

snmpset -v 1 -c private $DEVICEIP PowerNet-MIB::mfiletransferConfigTFTPServerAddress.0 s $SERVERIP
snmpset -v 1 -c private $DEVICEIP PowerNet-MIB::mfiletransferConfigSettingsFilename.0 s /apc/apc.cfg
snmpset -v 1 -c private $DEVICEIP PowerNet-MIB::mfiletransferControlInitiateFileTransfer.0 i initiateFileTransferDownloadViaTFTP
snmpget -v 1 -c private $DEVICEIP PowerNet-MIB::mfiletransferStatusLastTransferResult.0

The PDU will proceed to reset at this point; it can take up to two minutes. You should then be able to log in with the default of apc/apc. Beware that if you're running DHCP, it may get a new IP.

You should probably upgrade the firmware at this point. If you grabbed the updated firmware from APC, it's a self-extracting zipfile (unpack with unzip on Linux). FTP to the PDU with the default login, switch to binary mode (important!), and upload apc_hw02_aos_374.bin. Afterwards the device will reboot again; reconnect and upload apc_hw02_rpdu_374.bin, again in binary mode.
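
The FTP session looks roughly like this (filenames as unpacked from the firmware bundle above; the AOS image goes first):
ftp $DEVICEIP
# log in as apc / apc, then:
binary
put apc_hw02_aos_374.bin
quit
# wait for the reboot, reconnect, and repeat with apc_hw02_rpdu_374.bin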

Locking down your PDU

Now that you've reset the password and upgraded your device, it's time to lock it down PROPERLY. Switch to SSHv2 only, disable FTP, and change all the SNMP communities.

Giving up your PDU

If you're getting rid of old PDUs, please remember to remove the passwords on them! It makes it easier for the next sysadmin to deploy the PDU, and it also prevents leaking any of your passwords to an attacker with a serial cable and the factory password.


python-exec: solutions for package conflicts, and making it easier on users

Running into another system today with the fun python-exec block, I realise that while it has been discussed on the Gentoo mailing lists, and slightly on the forums, there have been hardly any posts about it in the blog stream.

I'm not going to go into what caused it, but rather cover solutions for package conflicts in the short term, and also the long term. The TL;DR general solution is running "emerge -1 dev-python/python-exec".

Here's the latest conflict I got on it; I wanted to install mirrorselect to compare some hosts:

hostname / # emerge -pv mirrorselect

These are the packages that would be merged, in order:
[ebuild  N     ] net-analyzer/netselect-0.3-r3  22 kB
[ebuild     U  ] dev-lang/python-2.7.5-r3:2.7 [2.7.3-r2:2.7] USE="gdbm hardened%* ipv6 ncurses readline ssl threads (wide-unicode) xml -berkdb -build -doc -examples -sqlite -tk -wininst" 10,026 kB
[ebuild     U  ] dev-lang/python-3.2.5-r3:3.2 [3.2.3:3.2] USE="gdbm hardened%* ipv6 ncurses readline ssl threads (wide-unicode) xml -build -doc -examples -sqlite -tk -wininst" 9,020 kB
[ebuild  N     ] dev-lang/python-exec-2.0:2  PYTHON_TARGETS="(jython2_5) (jython2_7) (python2_6) (python2_7) (python3_2) (-pypy2_0) (-python3_3)" 79 kB
[ebuild  N     ] dev-util/dialog-1.2.20121230  USE="nls unicode -examples -minimal -static-libs" 422 kB
[ebuild  N     ] app-portage/mirrorselect-2.2.0.1  PYTHON_TARGETS="python2_7 python3_2 -python2_6 (-python3_3)" 13 kB
[blocks B      ] <dev-python/python-exec-10000 ("<dev-python/python-exec-10000" is blocking dev-lang/python-exec-2.0)

Total: 6 packages (2 upgrades, 4 new), Size of downloads: 19,580 kB
Conflict: 1 block (1 unsatisfied)

 * Error: The above package list contains packages which cannot be
 * installed at the same time on the same system.

  (dev-python/python-exec-0.2::gentoo, installed) pulled in by
    dev-python/python-exec[python_targets_python2_7(-),-python_single_target_python2_5(-),-python_single_target_python2_6(-),-python_single_target_python2_7(-)] required by (dev-libs/libxml2-2.9.0-r2::gentoo, installed)

  (dev-lang/python-exec-2.0::gentoo, ebuild scheduled for merge) pulled in by
    dev-lang/python-exec:=[python_targets_python2_6(-)?,python_targets_python2_7(-)?,python_targets_python3_2(-)?,-python_single_target_python2_6(-),-python_single_target_python2_7(-),-python_single_target_python3_2(-)] (dev-lang/python-exec:=[python_targets_python2_7(-),python_targets_python3_2(-),-python_single_target_python2_6(-),-python_single_target_python2_7(-),-python_single_target_python3_2(-)]) required by (dev-python/setuptools-0.6.30-r1::gentoo, installed)
    dev-lang/python-exec:=[python_targets_python2_6(-)?,python_targets_python2_7(-)?,python_targets_python3_2(-)?,python_targets_python3_3(-)?,-python_single_target_python2_6(-),-python_single_target_python2_7(-),-python_single_target_python3_2(-),-python_single_target_python3_3(-)] (dev-lang/python-exec:=[python_targets_python2_7(-),python_targets_python3_2(-),-python_single_target_python2_6(-),-python_single_target_python2_7(-),-python_single_target_python3_2(-),-python_single_target_python3_3(-)]) required by (app-portage/mirrorselect-2.2.0.1::gentoo, ebuild scheduled for merge)
    dev-lang/python-exec:=[python_targets_python2_6(-)?,python_targets_python2_7(-)?,python_targets_python3_2(-)?,python_targets_python3_3(-)?,python_targets_pypy2_0(-)?,-python_single_target_python2_6(-),-python_single_target_python2_7(-),-python_single_target_python3_2(-),-python_single_target_python3_3(-),-python_single_target_pypy2_0(-)] (dev-lang/python-exec:=[python_targets_python2_7(-),python_targets_python3_2(-),-python_single_target_python2_6(-),-python_single_target_python2_7(-),-python_single_target_python3_2(-),-python_single_target_python3_3(-),-python_single_target_pypy2_0(-)]) required by (virtual/python-argparse-1::gentoo, installed)

For more information about Blocked Packages, please refer to the following
section of the Gentoo Linux x86 Handbook (architecture is irrelevant):

http://www.gentoo.org/doc/en/handbook/handbook-x86.xml?full=1#blocked

The new package needed here is just dev-lang/python-exec-2.0. We can reduce the conflict down to a minimal reproduction as follows:

HOST / # emerge -pv  dev-lang/python-exec

These are the packages that would be merged, in order:
[ebuild  N     ] dev-lang/python-exec-2.0:2  PYTHON_TARGETS="(jython2_5) (jython2_7) (python2_6) (python2_7) (python3_2) (-pypy2_0) (-python3_3)" 79 kB
[blocks B      ] <dev-python/python-exec-10000 ("<dev-python/python-exec-10000" is blocking dev-lang/python-exec-2.0)

Total: 1 package (1 new), Size of downloads: 79 kB
Conflict: 1 block (1 unsatisfied)

 * Error: The above package list contains packages which cannot be
 * installed at the same time on the same system.

  (dev-python/python-exec-0.2::gentoo, installed) pulled in by
    dev-python/python-exec[python_targets_python2_7(-),-python_single_target_python2_5(-),-python_single_target_python2_6(-),-python_single_target_python2_7(-)] required by (dev-libs/libxml2-2.9.0-r2::gentoo, installed)

  (dev-lang/python-exec-2.0::gentoo, ebuild scheduled for merge) pulled in by
    dev-lang/python-exec
    dev-lang/python-exec:=[python_targets_python2_6(-)?,python_targets_python2_7(-)?,python_targets_python3_2(-)?,-python_single_target_python2_6(-),-python_single_target_python2_7(-),-python_single_target_python3_2(-)] (dev-lang/python-exec:=[python_targets_python2_7(-),python_targets_python3_2(-),-python_single_target_python2_6(-),-python_single_target_python2_7(-),-python_single_target_python3_2(-)]) required by (dev-python/setuptools-0.6.30-r1::gentoo, installed)
    dev-lang/python-exec:=[python_targets_python2_6(-)?,python_targets_python2_7(-)?,python_targets_python3_2(-)?,python_targets_python3_3(-)?,python_targets_pypy2_0(-)?,-python_single_target_python2_6(-),-python_single_target_python2_7(-),-python_single_target_python3_2(-),-python_single_target_python3_3(-),-python_single_target_pypy2_0(-)] (dev-lang/python-exec:=[python_targets_python2_7(-),python_targets_python3_2(-),-python_single_target_python2_6(-),-python_single_target_python2_7(-),-python_single_target_python3_2(-),-python_single_target_python3_3(-),-python_single_target_pypy2_0(-)]) required by (virtual/python-argparse-1::gentoo, installed)

For more information about Blocked Packages, please refer to the following
section of the Gentoo Linux x86 Handbook (architecture is irrelevant):

http://www.gentoo.org/doc/en/handbook/handbook-x86.xml?full=1#blocked

So what do we know?

  1. We have dev-python/python-exec-0.2 installed; it has the default SLOT=0.
  2. Here's what the packages in the tree right now look like:

    $ egrep '^R?DEPEND|^SLOT' dev-{python,lang}/python-exec/*ebuild
    dev-python/python-exec/python-exec-10000.1.ebuild:SLOT="0"
    dev-python/python-exec/python-exec-10000.1.ebuild:RDEPEND="dev-lang/python-exec:0[${PYTHON_USEDEP}]"
    dev-python/python-exec/python-exec-10000.2.ebuild:SLOT="2"
    dev-python/python-exec/python-exec-10000.2.ebuild:RDEPEND="dev-lang/python-exec:0[${PYTHON_USEDEP}]
    dev-lang/python-exec/python-exec-0.3.1.ebuild:SLOT="0"
    dev-lang/python-exec/python-exec-0.3.1.ebuild:RDEPEND="!<dev-python/python-exec-10000"
    dev-lang/python-exec/python-exec-0.9999.ebuild:SLOT="0"
    dev-lang/python-exec/python-exec-0.9999.ebuild:RDEPEND="!<dev-python/python-exec-10000"
    dev-lang/python-exec/python-exec-2.0.ebuild:SLOT="2"
    dev-lang/python-exec/python-exec-2.0.ebuild:RDEPEND="!<dev-python/python-exec-10000"
    dev-lang/python-exec/python-exec-2.9999.ebuild:SLOT="2"
    dev-lang/python-exec/python-exec-2.9999.ebuild:RDEPEND="!<dev-python/python-exec-10000"
    
  3. If we try to bring in dev-lang/python-exec directly, it will trigger the block, because our version of dev-python/python-exec is too old.
  4. This entire problem happens because the python*r1 eclasses bring in dev-lang/python-exec.

This leads to a simple user-actionable solution of "emerge -1 dev-python/python-exec", which will work as follows (notice that portage uninstalls the old version for us):

HOST / # emerge -pv  dev-python/python-exec
These are the packages that would be merged, in order:
[ebuild  N     ] dev-lang/python-exec-0.3.1  PYTHON_TARGETS="(jython2_5) (jython2_7) (python2_6) (python2_7) (python3_2) (-pypy2_0) (-python3_3)" 73 kB
[ebuild  N     ] dev-lang/python-exec-2.0:2  PYTHON_TARGETS="(jython2_5) (jython2_7) (python2_6) (python2_7) (python3_2) (-pypy2_0) (-python3_3)" 79 kB
[uninstall     ] dev-python/python-exec-0.2  PYTHON_TARGETS="(jython2_5) (jython2_7) python2_5 (python2_6) (python2_7) python3_1 (python3_2) -pypy1_9 (-pypy2_0) (-python3_3)" 
[blocks b      ] <dev-python/python-exec-10000 ("<dev-python/python-exec-10000" is blocking dev-lang/python-exec-2.0, dev-lang/python-exec-0.3.1)
[ebuild  NS    ] dev-python/python-exec-10000.2:2 [0.2:0] PYTHON_TARGETS="(jython2_5) (jython2_7) (python2_6) (python2_7) (python3_2) (-pypy2_0) (-python3_3)" 0 kB

Total: 3 packages (2 new, 1 in new slot, 1 uninstall), Size of downloads: 152 kB
Conflict: 1 block

The above is not actually the minimal solution, but it is the best general solution. The minimal solution is to include the slot on the package, but in future if the slots change further and the default slot is removed, this won't work anymore.

HOST / # emerge -pv dev-python/python-exec:0
These are the packages that would be merged, in order:
[ebuild  N     ] dev-lang/python-exec-0.3.1  PYTHON_TARGETS="(jython2_5) (jython2_7) (python2_6) (python2_7) (python3_2) (-pypy2_0) (-python3_3)" 73 kB
[ebuild     U  ] dev-python/python-exec-10000.1 [0.2] PYTHON_TARGETS="(jython2_5) (jython2_7) (python2_6) (python2_7) (python3_2) (-pypy2_0) (-python3_3) (-pypy1_9%) (-python2_5%*) (-python3_1%*)" 0 kB
[blocks b      ] <dev-python/python-exec-10000 ("<dev-python/python-exec-10000" is blocking dev-lang/python-exec-0.3.1)

Total: 2 packages (1 upgrade, 1 new), Size of downloads: 73 kB
Conflict: 1 block

But now the better question is: as developers, can we help users prevent this, and at what cost? If we don't mind new users having an extra placeholder package, then yes, we CAN actually solve it for them. In all of the dev-lang/python-exec ebuilds we need to make this simple change:

 RDEPEND="!<dev-python/python-exec-10000"
+PDEPEND=">=dev-python/python-exec-10000:$SLOT"

This provides a nice solution as follows:

# emerge -pv dev-lang/python-exec
These are the packages that would be merged, in order:
[ebuild  N     ] dev-lang/python-exec-0.3.1  PYTHON_TARGETS="(jython2_5) (jython2_7) (python2_6) (python2_7) (python3_2) (-pypy2_0) (-python3_3)" 73 kB
[ebuild     U  ] dev-python/python-exec-10000.1 [0.2] PYTHON_TARGETS="(jython2_5) (jython2_7) (python2_6) (python2_7) (python3_2) (-pypy2_0) (-python3_3) (-pypy1_9%) (-python2_5%*) (-python3_1%*)" 0 kB
[blocks b      ] <dev-python/python-exec-10000 ("<dev-python/python-exec-10000" is blocking dev-lang/python-exec-2.0, dev-lang/python-exec-0.3.1)
[ebuild  N     ] dev-lang/python-exec-2.0:2  PYTHON_TARGETS="(jython2_5) (jython2_7) (python2_6) (python2_7) (python3_2) (-pypy2_0) (-python3_3)" 79 kB
[ebuild  NS    ] dev-python/python-exec-10000.2:2 [0.2:0] PYTHON_TARGETS="(jython2_5) (jython2_7) (python2_6) (python2_7) (python3_2) (-pypy2_0) (-python3_3)" 0 kB

Total: 4 packages (1 upgrade, 2 new, 1 in new slot), Size of downloads: 152 kB
Conflict: 1 block

All that remains is convincing the Python team to accept this solution for users...