Snippets of Technical stuff

Tuesday, January 17, 2017

If you are going to use machine certificates from AD to validate machines on your WiFi, it is important that the machine's UPN is part of the SAN (Subject Alternative Name) list.
The UPN (which looks like machine$@domain.com) is the identity of the user or computer, and it is what is used to look up the device in AD.
In our installation, the UPN was not added to machine certificates by default, so we could get EAP-PEAP and EAP-TLS working for users, but computer accounts would not validate.
This is one of the smaller things you need to remember. We were pointed in the right direction by an article about a Mac not being able to validate, as it did not have the UPN in its certificate. Since it was not part of AD autoenrollment, it had been issued with a different template than the other certs, and OS X does not insert a Microsoft UPN in the SAN.
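A quick way to check whether a given machine certificate actually carries the UPN is to dump its SAN with openssl (the file name is just an example; older openssl versions only show the UPN entry as othername:<unsupported>):

openssl x509 -in machine-cert.pem -noout -text | grep -A2 "Subject Alternative Name"
# on a recent openssl the UPN shows up as something like: othername: UPN::WORKSTATION01$@domain.com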
Why do we choose to use machine certificates to validate our WiFi? Easy: it still requires one of our devices to get on the net. On the wired LAN we trust the device plugged into the wall. And we have network segmentation and filtering in place, so a WiFi-connected machine will not have access to anything but needed services (Citrix and jump servers, Internet proxy, AD infrastructure, file and print servers, IP telephony and the intranet).
Special users with special network access requirements (the minority, <10%) can use a VPN that assigns network access rights based on the user's group membership. Here we have user validation and true 2-factor authentication.
Tuesday, June 28, 2016
DKIM and DMARC in an Exchange Online world
Stopping CEO phishing can be done using DMARC, but it will not cover lookalike domains.
SPF and DKIM are two ways to verify the sender of an e-mail, and DMARC is the next step, allowing you to say that if either SPF or DKIM succeeds, the mail is acceptable.
SPF operates on the SMTP envelope (the HELO/EHLO or MAIL FROM: domain), while DKIM is a digital signature over headers inside the e-mail.
With DMARC, it is validated that the visible From: address, also called header.from or RFC5322.From, matches the DKIM signature, or that the domain of the SPF validation matches the RFC5322.From domain. DMARC validation is done on the recipient's mail server.
Enabling DKIM signing of outgoing messages in Office365 is trivial, and so is setting up SPF. Both require access to create/modify DNS records.
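For reference, all three end up as a handful of DNS records you can check with dig. The values below are only illustrative; the DKIM CNAME targets depend on your tenant:

dig +short TXT main.com                           # SPF, e.g. "v=spf1 include:spf.protection.outlook.com -all"
dig +short CNAME selector1._domainkey.main.com    # DKIM, e.g. selector1-main-com._domainkey.contoso.onmicrosoft.com.
dig +short TXT _dmarc.main.com                    # DMARC, e.g. "v=DMARC1; p=none; rua=mailto:dmarcreports@main.com"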
Our problems
We have a shared mailbox, call it shared@main.com. We have two users, peter@main.com and anne@subbrand.com, who both handle mail in the shared@main.com mailbox. This has always worked fine to our knowledge. We then added DKIM signatures to all outgoing mail on main.com and subbrand.com. This works perfectly on mail from peter@main.com and anne@subbrand.com.
The issue comes when anne@subbrand.com wants to answer on behalf of shared@main.com. Microsoft has decided to include the DKIM signature of subbrand.com, and only that signature, in the outgoing mail.
Looking at the DKIM deployment guide http://www.dkim.org/specs/draft-ietf-dkim-deployment-11.html, it clearly states in section 6.5 that multiple signatures are allowed, but there is no agreement among recipients on which one to validate.
And section 7.3.2 clearly states that if ADSP signature validation is to take place, the signature must match RFC5322.From. But Microsoft signs with the DKIM key of the sending user's primary domain, not the one matching RFC5322.From.
The result is that when DMARC looks for a DKIM signature, it finds a non-aligned one, and these mails from shared@main.com fail the DMARC DKIM validation.
Luckily we use the same servers, so we can fall back to SPF, you might think. But no. If anne@subbrand.com uses the Exchange Online web interface, the SPF / SMTP MAIL FROM seems to be postmaster@eur01-db5-obe.outbound.protection.outlook.com, and that does not match main.com.
And if she is using desktop Outlook, the smtp.mailfrom is set to anne@subbrand.com, which is also not the same as main.com.
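For illustration, a mail that anne sends on behalf of the shared mailbox from desktop Outlook ends up with headers along these lines (only the domains are from the example above; everything else is made up):

From: shared@main.com
Sender: anne@subbrand.com
DKIM-Signature: v=1; a=rsa-sha256; d=subbrand.com; s=selector1; ...

and the recipient's Authentication-Results header then shows something like:

Authentication-Results: spf=pass smtp.mailfrom=subbrand.com; dkim=pass header.d=subbrand.com; dmarc=fail header.from=main.com

Neither header.d nor smtp.mailfrom aligns with the header.from domain main.com, so DMARC fails even though SPF and DKIM both pass on their own.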
Conclusion
Microsoft just recently started to support incoming DMARC validation and outgoing DKIM signing, and it seems likely that adoption has been low, as I do not see the above situation as something rare or very special. But it probably was not listed as a test case for the developers. We can only hope that Microsoft will fix this soon, so that we can start enabling quarantine in our DMARC records. As it is now, too many mails going out to the Internet would be flagged as spam if we did.
Workaround
To get things to work, and at least block CEO spam, we have had to create some transport rules. We know that only the few cross-domain shared mailboxes are the issue, so we can whitelist these if they have a dkim=pass and a signature from subbrand.com in the Authentication-Results header.
Looking for something like: header.from=subbrand.com;main.com; dkim=pass will catch these. But you have to keep updating the transport rule.
But 3rd parties receiving our mails still see the p=none in the DMARC record. They can't implement the workaround.
Update 2016-07-01
We have been communicating with Microsoft, both Premier Support and Terry Zink through a mailing list.
The issue comes from SendOnBehalfOf, which was never part of the Internet mail standards but is legacy from Lotus Notes / Exchange. You cannot set these user rights in the GUI in Exchange Online, only through PowerShell.
If you change the permission on the mailbox to SendAs, then the identity of the author is removed entirely, and everything will be aligned with the shared mailbox. But this is a change we are probably not prepared to make; we will more likely align shared@main.com with the subbrand.com domain instead.
Another problem popped up when people write to distribution groups that are not expanded at the client. When the mail then hits Exchange Online, it is "reborn" as a new, virgin e-mail with the protection.outlook.com smtp.helo and no DKIM signature at all (it was considered internal in the first place). To Exchange Online the mail is new, from the outside, unaligned in SPF and without DKIM, so it will fail DMARC.
So we are still not at the point where we can enable quarantine, but at least we have transport rules for filtering. If DMARC passes, we set SCL=-1. If not, we do other checks on the headers. Mail from some key users will be marked as spam if DMARC fails, so they cannot use cloud distribution groups, only groups that are in the address book and can be expanded locally.
Friday, October 23, 2015
Scanning for SSL certificates
I needed a way to quickly find out which SSL certificates we are using, so I came up with this quick shell script to scan for SSL sites and grab the certificates. It scans a class B net in around 10 minutes.
This small script scans for open port 443 using nmap, and uses openssl to connect to the server, extract the certificate, and parse it to human readable format. Then it prefixes all output lines with the IP address to make the output file grep-able.
#!/bin/bash
# shell script to find and extract certificates from web servers
# (c) Copyright 2015 by Povl H. Pedersen - This script can be modified and used
# as you wish. Do not use as is without attribution.
pid=$$
cd /tmp
nmap -p80,443 -oA /tmp/sslscan-$pid -PS443 -sT $@ >/dev/null
# sed on the 2nd line is there to sanitize IP addresses to contain only digits + .
# This to avoid commands/garbage/race condition from the nmap output file to be executed.
cat /tmp/sslscan-$pid.gnmap | fgrep ' 443/open' | awk ' {print $2}' |
sed 's/[^0-9\.]*//g' |
awk ' {print "openssl s_client -connect "$1":443 </dev/null |openssl x509 -noout -text | sed \"s/^/"$1"\t/\""}' |
sh
rm -f /tmp/sslscan-$pid.*nmap
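A typical run against a class B network, saved to a file and then grepped for the interesting fields (script name and network are just examples):

./sslscan.sh 172.16.0.0/16 > certs.txt
grep -E 'Subject:|Not After' certs.txt    # who the certificates are issued to, and when they expire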
Tuesday, June 2, 2015
Ethernet over existing cables
In an attempt to get IPTV to my living room, I have tried a wireless bridge with limited success. I have also tried HomePlug AV2, which was even worse than the wireless bridge.
My problem is that I have two antenna cables near my TV, one for satellite and one for TV. They seem to be stuck in the tube (probably the soap I used to get past the first bend), or I would replace one with an Ethernet cable. I have an injector that can mix TV (<865 MHz) and sat on the same antenna cable, so I really only need one.
But since both cables are stuck, Ethernet cabling is not an option.
Since my cable company can provide high-speed Internet over antenna cables, I looked for a home solution. And there is actually one solution that works: the Marmitek IPTV Coax Pro, which uses the MoCA 1.1 standard to run 100 Mbit Ethernet over coax. I tested it on both the sat cable and the CATV cable, and both worked, giving 100 Mbit/s consistently.
I have found one issue: multicast is limited to 10 Mbit/s, which makes it worthless for IPTV unless you can encapsulate the multicast traffic.
My first attempt was simple VLAN bridging between an OpenWRT-based router at each end. This worked, but still hit the 10 Mbit/s limit, as multicast packets are also recognizable by a bit set in their destination MAC address. So the only solution would be a protocol that adds a new header.
Then I created a GRE tunnel between the two routers and bridged the tunnel with the untagged VLAN (basically just an isolated switch port) at each end. This resulted in multicast loopback, eating all the bandwidth and causing the fiber router to transmit noise over a wide band of the antenna output, even at the high frequencies, jamming both analog TV and the Marmitek.
So I decided to put a DHCP server on the OpenWRT near the IPTV box, assign it an IP address, and use source-based routing to send the traffic over the GRE tunnel.
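A minimal sketch of that last setup in plain iproute2 commands (interface names and addresses are made up; on OpenWRT this would normally live in /etc/config/network):

# On the OpenWRT next to the IPTV box: routed GRE tunnel towards the provider-facing router
ip tunnel add gre-iptv mode gre local 192.168.100.2 remote 192.168.100.1
ip addr add 10.99.0.2/30 dev gre-iptv
ip link set gre-iptv up
# The IPTV box gets its address from this router's DHCP server, e.g. on 192.168.50.0/24,
# and source-based routing sends everything from that subnet through the tunnel
ip rule add from 192.168.50.0/24 table 100
ip route add default via 10.99.0.1 dev gre-iptv table 100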
Since I am routing an internal IP address, I also need to set up igmpproxy on the provider-facing OpenWRT and, for good measure, source-based routing on the outside OpenWRT as well.
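On the provider-facing OpenWRT, igmpproxy basically just needs to know which interface multicast arrives on and where to forward it. A minimal, illustrative igmpproxy.conf (interface names are made up; on OpenWRT this is usually expressed in /etc/config/igmpproxy) could look like this:

quickleave
phyint eth1 upstream  ratelimit 0  threshold 1
        altnet 0.0.0.0/0
phyint gre-iptv downstream  ratelimit 0  threshold 1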
It is somewhat complicated to get everything set up together. If only the Marmitek box did not limit multicast, the simple VLAN bridge would have worked.
Monday, December 1, 2014
Microsoft ADFS and logging
Microsoft ADFS 2.0 and 3.0 have a major shortcoming: they do not log the client IP address of failed authentication attempts.
I work in a company with many users who have more than one device, and sometimes they even replace devices with the newest version. As a result, they often have devices configured to check mail using ActiveSync that are no longer using the correct password. The result is that their AD user account gets locked out.
To help with this problem, Microsoft added a feature to ADFS to stop the user from being locked out of AD. But they did not address the main problem: helping us identify the device that is attempting the bad logons. They hide the symptom rather than help us fix the problem.
The only way to find the client IP address is to log all requests to the server using tcpdump / Wireshark and then look at the POST data, or to replace ADFS with a 3rd-party product, like something on Linux designed for medium-sized and enterprise usage. It seems like the target group for ADFS is still internal users and small workgroups.
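If you go the capture route, something like this grabs the traffic (interface name is an example). Note that the POST data is inside TLS, so to actually read it you either have to decrypt the capture in Wireshark with the ADFS server's private key (which only works for non-DHE cipher suites) or capture behind something that terminates TLS:

tcpdump -i eth0 -s 0 -w adfs.pcap 'tcp port 443'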
Wednesday, November 19, 2014
SQL Server 2008 R2 Express bad performance
SQL Server 2008 R2 suffers from very bad performance if there are lots of logon requests. The symptom is that LSASS.exe eats up to 100% of one core, blocking the system.
The main reason is that Microsoft increased security in SQL Server 2008 and uses the PBKDF2 hash for password hashing and validation. This is very good, and the recommended hash to use for passwords. This hash is, by design, very slow to calculate, and it is even recommended to iterate it many times over its own output. This makes it very difficult for hackers to crack a password given a hash.
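You can get a feel for how expensive this kind of iterated hashing is with OpenSSL 1.1.1 or newer, which can run PBKDF2 with an arbitrary iteration count; the numbers here are just for illustration, and the second run takes noticeably longer:

time openssl enc -aes-256-cbc -pbkdf2 -iter 1000 -pass pass:test -in /dev/null -out /dev/null
time openssl enc -aes-256-cbc -pbkdf2 -iter 1000000 -pass pass:test -in /dev/null -out /dev/null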
Microsoft has a hotfix that is supposed to help with the performance, but it did not make much difference for me.
I was using a 3rd-party app that triggered these performance issues and made LSASS.exe the system bottleneck.
There are three obvious solutions, and only the first will be an option for most people. And since extended support for SQL Server 2005 ends in April 2016, it is only a time-limited solution until the app vendor fixes their issues.
- Switch back to SQL Server 2005 Express. This solved the issue completely for me, and now the other processes can use all the cores instead of being blocked waiting for SQL authentication.
- Do what Microsoft has recommended for more than 10 years: switch to Windows integrated authentication. The problem is that many apps do not support it.
- If you own the source code of the app, recode it to keep a connection pool and reuse SQL connections to minimize the number of logons. We can only expect logons to become more expensive.
Tuesday, November 11, 2014
Converting MKV to m4v for iTunes on Linux
Often when looking for video, you might find MKV files on the Internet. Getting these into a format usable in iTunes / AppleTV can be a time-consuming process, especially if you consider using HandBrake to convert the file.
Often you can use simple Linux tools to convert the file from an MKV container to an MPEG-4 container without actually re-encoding the video; usually you just need to ensure that there is a 2-channel AAC sound track.
When you get the MKV file, the first step is to see what is inside it. For this we use mkvinfo:
mkvinfo myfile.mkv
and you will get track information about the contents:
...
|+ Segment tracks
| + A track
| + Track number: 1 (track ID for mkvmerge & mkvextract: 0)
| + Track UID: 12345
| + Track type: video
| + Lacing flag: 0
| + MinCache: 1
| + Codec ID: V_MPEG4/ISO/AVC
| + CodecPrivate, length 42 (h.264 profile: High @L4.1)
| + Default duration: 41.708ms (23.976 frames/fields per second for a video track)
| + Language: swe
| + Name: A Good Movie
| + Video track
| + Pixel width: 1920
| + Pixel height: 816
| + Display width: 1920
| + Display height: 816
...
| + A track
| + Track number: 4 (track ID for mkvmerge & mkvextract: 3)
| + Track UID: 16248958973911107725
| + Track type: audio
| + Default flag: 0
| + Codec ID: A_DTS
| + Default duration: 10.667ms (93.750 frames/fields per second for a video track)
| + Language: dan
| + Name: DTS
| + Audio track
| + Sampling frequency: 48000
| + Channels: 6
...
Here we can see that it is a movie already encoded in H.264 High Profile Level 4.1. AppleTV 3 officially only supports High Profile Level 4.0 or lower, but don't worry: my AppleTV 3 played the above stream without any trouble, so it might have been compressed at 4.0 anyway. The iPhone 4S / iPad 2 support High@L4.1. If there is any trouble with the video, or the video is too large (like 5 GB/hour), you can recompress it using HandBrake from https://handbrake.fr/. It runs on Linux, Windows and Mac, and has good defaults.
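If you do need to recompress, the command-line version can do it unattended. The preset name below comes from older HandBrake versions and may differ in yours:

HandBrakeCLI -i myfile.mkv -o myfile.m4v --preset "AppleTV 3"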
We also have to look at the sound tracks, and here we find a Danish soundtrack as track 4 (3 for mkvextract), in DTS format only.
DTS is not supported by AppleTV, so we need to convert it to the required 2-channel AAC, and we can also convert it to AC3 for better surround sound. Other videos you find will already have AC3, and then you will not need to convert the streams, just extract them.
First we extract the tracks we need:
mkvextract tracks myfile.mkv 0:video.h264 3:audio.dts
which extracts track 0 to video.h264 and track 3 to audio.dts.
Now we need to convert the sound tracks. This is a lossy operation, and it takes time:
avconv -i audio.dts -acodec ac3 -ac 6 -ab 640k audio.ac3 # can be skipped if already ac3
avconv -strict experimental -i audio.dts -dmix_mode 1 -ac 2 -b:a 160k -strict experimental audio.aac
The second line should use Dolby Surround mode for the downmix.
If you found chapter markers you would like to keep, you can extract them like this:
mkvextract chapters -s myfile.mkv >chapters.txt
Now that we have all the streams we need, we can assemble (mux) them into an MPEG-4 container.
This is done with MP4Box. Be sure to use the FPS from the mkvinfo output, as MP4Box defaults to 25 FPS, which would put video and audio out of sync.
MP4Box -fps 23.976 -add video.h264 -add "audio.aac:lang=dan" -add "audio.ac3:lang=dan" -chap "chapters.txt" myfile.m4v
If you want to add a subtitle for use with Apple devices, just add it like this (the hdlr option is important):
MP4Box -add myfile.srt:lang=eng:hdlr="sbtl:tx3g" myfile.m4v
And now you are done. Remember, with MP4Box you can always add missing tracks later.
If you have an MKV and want to add subtitles to it, you need to create a new file with the tracks added, so it does not really merge in place:
mkvmerge -o Subbed-Movie.mkv Unsubbed-movie.mkv --sub-charset 0:UTF8 --language 0:dan --track-name 0:Dansk Mysubtitles-utf8.srt