Here's another useful bit of code used to generate formatted XML. This particular code - in C++ form - will eventually be incorporated into the Platypus API Service, when logging API calls is enabled.
Function PrettyPrintXML(strXML)
    Dim objReader, objWriter

    ' The SAX reader replays the document's parse events into the
    ' MXXMLWriter, which re-serializes them with indentation.
    Set objReader = CreateObject("MSXML2.SAXXMLReader.6.0")
    Set objWriter = CreateObject("MSXML2.MXXMLWriter.6.0")

    objWriter.indent = True
    objWriter.standalone = False
    objWriter.omitXMLDeclaration = False
    objWriter.encoding = "utf-8"

    Set objReader.contentHandler = objWriter
    Set objReader.dtdHandler = objWriter
    Set objReader.errorHandler = objWriter

    objReader.putProperty _
        "http://xml.org/sax/properties/declaration-handler", objWriter
    objReader.putProperty _
        "http://xml.org/sax/properties/lexical-handler", objWriter

    objReader.parse strXML
    PrettyPrintXML = objWriter.output
End Function
The credit for this actually goes to Daniel Rikowski on Stack Overflow. All I did was convert it into usable VBScript.
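The same transformation is easy to approximate outside of VBScript for quick testing. Here's a rough Python sketch (purely illustrative - not part of Platypus) using the standard library's minidom:

```python
import xml.dom.minidom

def pretty_print_xml(xml_text):
    """Re-serialize an XML string with two-space indentation."""
    dom = xml.dom.minidom.parseString(xml_text)
    # toprettyxml adds an XML declaration and indents each element
    return dom.toprettyxml(indent="  ")

print(pretty_print_xml("<a><b>text</b></a>"))
```

Unlike the MSXML version, minidom doesn't offer the same fine-grained control over the declaration and encoding, but it's handy for eyeballing output.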
Wednesday, May 09, 2012
SQL Challenge - Part 1 - How to Find Contact Email Addresses in Platypus That May Cause an SMTP 550 Error.
Occasionally, if I am very lucky, someone will confront me with what I call the "SQL Challenge". Way back in the early 2000s, one of my coworkers and I would try to come up with ways to stretch our SQL knowledge. Thanks to the "SQL Challenge", I am now capable of some fun acrobatics using just a little SQL.
The rules are fairly simple, once you have the actual challenge. The SQL must be encapsulated into a single SQL statement - be that select, delete, insert or update. It can have all the joins, subclauses, derived tables, and whatever else SQL provides, as long as it is included in a single statement. If inserting, updating or deleting, you are allowed a separate SQL statement for each table, but that is the only exception to the one-statement rule. So, no cursors or loops. You get bonus points for using ANSI SQL syntax and for how quickly you finish.
Today, I got one of those challenges. Let's start with the givens.
1. We have one or more email addresses stored in customer.email.
2. These emails can be comma delimited or semicolon delimited.
3. There may be spaces embedded before or after commas/semicolons.
4. The emails may be hosted internally or by a 3rd party provider, or a combination of the two.
5. A list of domains is stored in domain_item.domain. These domains are hosted internally.
6. A list of email addresses is stored in email_data.emailaddr. These emails are hosted internally.
7. When sending email messages to the email addresses stored on customer.email, if one of the email addresses is on a hosted domain but the email address is not in the list of hosted emails, the SMTP server will return a 550 (mailbox not found) error.
Now, the challenge. Given all of the above, we want to find a list of customers with email addresses in customer.email that will generate a 550. This means we want to find email addresses that are attached to a hosted domain but are not hosted email addresses. After a little less than an hour, I made this...
select
    /* The unique customer id */
    customer.id,
    /* The delimited list of contact email addresses */
    customer.email,
    /* A hosted domain matching one of the email addresses */
    domain_item.domain,
    /* The number of email addresses in customer.email */
    len(customer.email) - len(replace(customer.email, '@', '')) as email_count,
    /* The number of emails that match the hosted domain */
    (
        len(replace(';' + replace(customer.email, ' ', '') + ';', ',', ';'))
        - len(replace(replace(';' + replace(customer.email, ' ', '') + ';', ',', ';'), '@' + domain_item.domain + ';', ''))
    ) / len('@' + domain_item.domain + ';') as domain_count,
    /* The number of hosted email addresses that match the hosted domain */
    (
        select count(*)
        from email_data
        where customer.id = email_data.d_custid
          and ';' + replace(replace(customer.email, ' ', ''), ',', ';') + ';' like '%;' + email_data.emailaddr + ';%'
          and email_data.emailaddr + ';' like '%@' + domain_item.domain + ';'
    ) as match_count
from customer
inner join domain_item
    on replace(';' + replace(customer.email, ' ', '') + ';', ',', ';') like '%@' + domain_item.domain + ';%'
where
    /* Strip the hosted domain from the delimited list, */
    /* comparing the difference divided by the length of the hosted domain name. */
    /* This will let us know how many emails _should_ be hosted for this domain. */
    (
        len(replace(';' + replace(customer.email, ' ', '') + ';', ',', ';'))
        - len(replace(replace(';' + replace(customer.email, ' ', '') + ';', ',', ';'), '@' + domain_item.domain + ';', ''))
    ) / len('@' + domain_item.domain + ';')
    /* Compare the _should_ total against the _actual_ total. */
    /* If they don't match exactly, include the customer in the result. */
    <> (
        /* Find the number of hosted email addresses for this customer */
        /* that match one of the email addresses on the delimited list. */
        /* This will let us know how many emails are _actually_ hosted for this domain. */
        select count(*)
        from email_data
        where customer.id = email_data.d_custid
          and replace(';' + replace(customer.email, ' ', '') + ';', ',', ';') like '%;' + email_data.emailaddr + ';%'
          and email_data.emailaddr + ';' like '%@' + domain_item.domain + ';'
    )
go
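The counting trick at the heart of the query - occurrences equal the length difference divided by the pattern length - is easier to see outside SQL. Here's a hypothetical Python rendition of the same logic, using made-up sample rows rather than real Platypus data:

```python
def count_occurrences(haystack, needle):
    # Same trick as the SQL: remove every occurrence, then divide the
    # length difference by the pattern length
    return (len(haystack) - len(haystack.replace(needle, ""))) // len(needle)

def should_be_hosted(email_list, domain):
    # Normalize like the query: strip spaces, unify delimiters, wrap in semicolons
    normalized = ";" + email_list.replace(" ", "").replace(",", ";") + ";"
    return count_occurrences(normalized, "@" + domain + ";")

# Hypothetical sample data standing in for customer.email / email_data.emailaddr
emails = "bob@example.com, sue@example.com; joe@other.net"
hosted = {"bob@example.com"}  # sue@example.com is missing -> would 550

expected = should_be_hosted(emails, "example.com")
actual = sum(1 for e in emails.replace(" ", "").replace(",", ";").split(";")
             if e in hosted and e.endswith("@example.com"))
print(expected, actual, expected != actual)  # 2 1 True
```

When the _should_ and _actual_ counts differ, the customer gets flagged - exactly the `<>` comparison in the where clause above.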
Sunday, May 06, 2012
Base64 Encoding using MSXML
At the time we were developing the base64 encoder - covered in my last post - there already existed a quick and dirty way to perform base64 encoding/decoding through MSXML. Even though we decided not to pursue this for any Platypus development, it might benefit someone. Here are examples of using MSXML for base64 encoding/decoding from within VBScript.
Function Base64Encode(strData)
    Dim objDocument, objNode

    ' A DOM node with dataType "bin.base64" exposes its typed value as base64 text
    Set objDocument = CreateObject("MSXML2.DOMDocument")
    Set objNode = objDocument.createElement("document")
    objNode.dataType = "bin.base64"
    objNode.nodeTypedValue = strData
    Base64Encode = objNode.text
End Function

Function Base64Decode(strData)
    Dim objDocument, objNode

    ' Assigning base64 text and reading nodeTypedValue returns the decoded bytes
    Set objDocument = CreateObject("MSXML2.DOMDocument")
    Set objNode = objDocument.createElement("document")
    objNode.dataType = "bin.base64"
    objNode.text = strData
    Base64Decode = objNode.nodeTypedValue
End Function
This is covered in more detail in Microsoft's Knowledge Base Article named "How To Create XML Documents with Binary Data in Visual Basic", along with examples for hex encoding and date encoding (ISO-8601).
Labels: Application Compatibility, Decidering, MSXML, VBScript
Saturday, May 05, 2012
Base64 Encoding in Platypus
Base64 encoding was originally designed for transmitting binary data over SMTP. When email servers were first created, the SMTP protocol restricted message content to 7-bit US-ASCII characters. Among other things, this prevented binary files from being sent over email.
Then came MIME and with it, base64 encoding, which converted binary files into readable text suitable for sending over email. With base64, for every 3 bytes of binary input there are 4 bytes of encoded text returned. Compared with hex (base16) encoding, the other dominant encoding mechanism at the time, base64 is much more efficient in terms of storage, lowering the expansion of attachments from 2:1 (for hex) to 4:3 (for base64). Since its creation, base64 encoding has been included in a variety of internet protocols - including SMTP authentication, HTTP authentication and XML - and it is used in various subsystems of the Platypus Billing System.
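Those ratios are easy to sanity-check. A quick illustration in Python (not Platypus code, just a demonstration):

```python
import base64

# Three input bytes always become exactly four output characters
raw = b"abc"
assert len(base64.b64encode(raw)) == 4

# Hex doubles the size of the payload; base64 grows it by only a third
payload = bytes(range(60))
print(len(payload.hex()), len(base64.b64encode(payload)))  # 120 80
```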
While Visual FoxPro - the base language for the Platypus Billing System - includes features for encoding and decoding base64 through STRCONV(), that function does not follow the line length requirement in the RFC 2045 specification, which limits encoded lines to a maximum of 76 characters. For example, if data were encoded into 100 characters, two lines of encoded data should result: a first line of 76 characters, followed by a CRLF, and then the remaining 24 characters.
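The wrapping rule itself is simple to express. Here is a small Python sketch of the 76-character split described above (illustrative only - this is not the C++ library's code):

```python
import base64

def mime_wrap(data, line_length=76):
    """Base64-encode bytes and fold the output into RFC 2045 lines."""
    encoded = base64.b64encode(data).decode("ascii")
    lines = [encoded[i:i + line_length]
             for i in range(0, len(encoded), line_length)]
    return "\r\n".join(lines)

# 75 input bytes -> 100 base64 characters -> one 76-char line plus a 24-char line
wrapped = mime_wrap(b"\x00" * 75)
print(len(wrapped.split("\r\n")[0]), len(wrapped.split("\r\n")[1]))  # 76 24
```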
At first, this limitation was not a problem. We used 3rd party ActiveX/COM libraries, named EncodeX and SmtpX from Mabry Software, for encoding email attachments and sending emails, respectively. The other use of base64 within Platypus - our own attachment feature first included in Platypus v3 - did not have to interact with any external systems, so RFC 2045 compliance was not a requirement.
Eventually, we began work on creating a shared library that could wrap all our SMTP functionality for Platypus v5. Up until this point, there were separate classes and libraries for the Platypus client and API. For one, the Platypus client used ActiveX forms of the libraries, while the Platypus API used the COM forms. Of course, this made maintaining that code doubly difficult and it was prone to inconsistent behavior between the Platypus client and API. Because of that inconsistency, limitations in Visual FoxPro, and compatibility problems, we dropped the ActiveX form of SmtpX and completely dropped the EncodeX libraries.
Unfortunately, because FoxPro's STRCONV() function did not strictly follow MIME, sending attachments encoded using this function would often generate errors. The strange thing was that these errors were not universal across all mail servers. Some were more strict than others. Anyway, to move forward, we decided to develop our own library for base64 encoding.
This was my first C++ project with Platypus. Up until this point, my development experience - excluding school - was limited to Visual FoxPro, Visual Basic, and SQL. After an excessive amount of research, we found a decent example in the public domain which performed encoding quickly enough and adapted it into a COM library using Visual C++ 6.0. That library has been in use since that time for a majority of Platypus v4 and all of Platypus v5 and v6. Not bad for a piece of code 10 years old. With the release of Platypus v7, that COM library has been phased out in favor of the Mailbee SMTP COM library, which handles encoding internally; but the old base64 COM library is still included in our installation sets as part of a fallback feature.
The original C++ source for the encoder has been phased out of our other components, as well; being replaced with a much improved library. This new library takes advantage of Microsoft's SecureCRT (secure C runtime) guidelines to prevent buffer overflows, now includes an efficient decoder, and has been fuzz tested to ensure stability. It is currently in use within the updated Mailpopper for re-encoding email attachments that have been decoded by the Mailbee POP3 COM library, which pulls emails into the Helpdesk features in Platypus.
Monday, April 30, 2012
Configuring DEP on Windows Server 2008 R2 from a 32bit NSIS Installer
If the title is hard to understand, let me just shorten it to this: the woes of compatibility testing!
Starting with Windows Server 2008 R2, Application Compatibility settings in the 32bit registry now appear to be ignored. This means the Platypus client will now crash whenever our troublesome ActiveX library rears its head, because our installation set can't get to the 64bit registry. Not easily, anyway.
Included with the Platypus Billing System are a number of 3rd party ActiveX libraries. Most of the time, these libraries are wondrous things. Unfortunately, one in particular has a problem with Windows DEP.
Now, upon installation of the Platypus client, we can get around this by configuring the Application Compatibility settings of our executable to bypass DEP. So, whenever the exe is launched, the OS will not trap DEP problems for our process. We do all of this by simply writing to the appropriate location in the registry during the installation process.
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers

All the way from Windows XP up to Windows Server 2008, we simply had to write to the registry. It didn't even matter whether the OS was x86 or x64. It just worked.
Anyway, one of the more ingenious and infamous features from Windows Vista is redirection. You see, whenever a 32bit application writes to HKEY_LOCAL_MACHINE in the registry on 64bit Windows, it is actually writing to HKEY_LOCAL_MACHINE\Software\Wow6432Node. So, because our installation set is a 32bit application itself, whenever our software was installed and it wrote to the registry, it was actually writing to the 32bit subset of the registry. This includes those pesky Application Compatibility settings I mentioned. Up until the latest version of Windows Server, the OS didn't care whether you configured the compatibility settings in the 32bit or the 64bit registry; it checked both when the process started. Here's an example of the code we used for configuring DEP*.
!include LogicLib.nsh
!include WinVer.nsh

Var /GLOBAL NSISRegPath
StrCpy $NSISRegPath "SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers"

${If} ${AtLeastWinXP}
    WriteRegStr HKLM "$NSISRegPath" "$INSTDIR\program.exe" "DisableNXShowUI"
${EndIf}
To get around this problem, the creators of NSIS - who provide us with the software for making installation sets - were kind enough to take advantage of some features in 64bit Windows that allow us to get around redirection. Mostly.
Unfortunately, what NSIS can disable is only file redirection - not registry redirection. Since our installation set writes directly to the registry, disabling file redirection doesn't help us by itself. So, we have to find a way to reach the 64bit registry using nothing but file redirection. This leads us to reg.exe - a nifty little utility, first shipped with Windows XP, that allows the registry to be accessed from the command line. Since 64bit Windows has both a 32bit reg.exe and a 64bit reg.exe, disabling file redirection should allow us to call the 64bit copy directly, which doesn't have that pesky 32bit registry limitation.
All we need to do is check for 64bit Windows, disable file redirection, run reg.exe and then reenable file redirection. That gives us code that looks something like this, which actually does work*.
!include LogicLib.nsh
!include WinVer.nsh
!include x64.nsh

Var /GLOBAL NSISRegPath
StrCpy $NSISRegPath "SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers"
Var /GLOBAL EXERegPath
StrCpy $EXERegPath "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers"

${If} ${AtLeastWinXP}
    ; Write the 32bit compatibility flag (redirected to Wow6432Node on x64)
    WriteRegStr HKLM "$NSISRegPath" "$INSTDIR\program.exe" "DisableNXShowUI"
    ${If} ${RunningX64}
        ; With file redirection disabled, $SYSDIR\reg.exe resolves to the
        ; 64bit copy, which writes to the native 64bit registry view
        ${DisableX64FSRedirection}
        ExecWait '$SYSDIR\reg.exe add "$EXERegPath" /v "$INSTDIR\program.exe" /d "DisableNXShowUI"'
        ${EnableX64FSRedirection}
    ${EndIf}
${EndIf}
Finally! It works! Total time of this endeavor is a little over an hour. Now I'm off for more compatibility testing. Windows 8 and Windows 2012 up next!
Tuesday, December 20, 2011
What's your backup solution?
Over the past twelve years I can remember three separate hard drive failures; two on work computers. Those are the ones I can remember, anyway. After the second one, I started getting serious about backups.
I had written a VBScript back in 2000 that I used to archive my library every day using 7-zip - a poor man's version of source control. At first, I burned those archives to cd-rom every month or so. In fact, I still have a few of those cd-roms floating around my home office. Now, this didn't just include my archive; it included all the software I had dealt with - specifically the software used for integrations. It was mostly RADIUS servers, email servers and a couple ftp servers - along with license keys, notes, sql, help files, contact names and phone numbers (basically everything needed to start over in case of catastrophic loss).
Eventually, I moved from writing software integrations to being a proper software developer. I was actually doing double duty then, writing software integrations and adding new features to the Platypus client. So, I got to enjoy the wondrous world of source control for Platypus, but integrations still stayed in zips.
Of course, after my second hard drive crash, that backup solution just wasn't enough. Cd-roms take up physical space and require keeping track of where they all are. They did contain sensitive information, after all. So, I had to be careful with what I did with them. Anyway, what I really wanted was a daily, reusable system that I could use to back up everything - including source code not ready for check-in - that preferably didn't involve cd-rws. So, I bought my first USB drive - the Soyo Cigar Pro 128MB USB Flash Memory Drive for $72.94 on Feb 7, 2003. I know it isn't much now, but back then it was an amazing thing - solid state engineering at its best.
At that point, I reconfigured my VBScript to zip and copy everything over to the flash drive, which I dutifully ran every day before going home. I even used a combination where I would fill up the flash drive, and then copy everything to a cd-rom. Actually, I only kept Friday backups in an effort to cut back on space, which meant I only had to burn a cd-rom every three to six months. That was much more acceptable than a new cd-rom every month, and it made sure I had valid daily backups with a decent historical archive.
From there, I moved from using a desktop to a laptop for development, and integrations fell by the wayside, but I still performed a daily backup of all the source code I was writing. I moved to a 256MB flash drive, then to a 512MB and finally to a 2GB. Since I didn't need integrations backed up, I dropped cd-roms altogether and kept only source code backups for a month or two. If I needed something older, it belonged in source control.
Today, all of that has changed. I'm back to using a desktop, and I don't use either flash drives or cd-roms for backups. I now use a combination of RAID 1, IDrive and VMware for backups. Sure, it's only RAID 1 - and Matrix RAID, at that, instead of hardware-based RAID - but that was enough when one of the drives died suddenly. Probably the best decision I made when getting my desktop from Dell was to get RAID pre-installed. Since then, there haven't been any problems, but it is nice to know I am covered in case of catastrophe. Even better is the fact that I no longer have to put forth any effort to ensure my data is backed up.
Wednesday, August 03, 2011
Goodbye Firefox. It was nice knowing you.
I've finally given up on Firefox. It was a great browser for its time, but it has become unusable for me. Don't get me wrong, I would still love to use it, but it has become too cumbersome.
First, a little background. This all takes place on my laptop, which has a mobile version of the i7 processor and 4GB of RAM; neither of which should be scoffed at. Plus, I am running the latest and greatest version of Firefox. Even with all that horsepower, Firefox has been *pull my hair out* frustrating lately.
This problem has been going on for weeks now, but today was my final breaking point. Firefox had been running for days - I'm not exactly sure how long, probably over a week. Regardless, I had over 30 tabs open. Every now and then - especially when closing or switching tabs - the browser would hang for around a minute. Really!? Closing a tab takes a minute? That can't be right. I have a freakin' i7 processor and oodles of RAM.
I checked Task Manager and Process Explorer. Nothing was taking up any significant amount of processor time. Even Firefox was in single digits - as it should be with an i7 processor. Ok. Well, first thing: plugins. I disabled all but what I would consider essential plugins (I actually did this last week, but I wanted to see how it went before I made a rash judgement). The ones I kept are Adobe Acrobat, DivX, QuickTime, Flash, Silverlight and Adblock Plus. All well known and fairly stable plugins. Nothing wacky.
Disabling plugins had no effect - at least nothing I could notice. Ok. Maybe it's still one of the plugins. I checked Task Manager and Process Explorer and killed all the plugin-container.exe processes I could find. Still no effect. In some of these cases, there weren't even any plugin-container.exe processes running. This leaves one major thing that I can think of. Mozilla needs to learn an age-old lesson for large applications: shared memory is bad - meaning you absolutely have to have some sort of separation or isolation between components. (Now, this doesn't go for every application, but browsers definitely fit the bill.) Linux learned the lesson ages and ages ago with "don't create monolithic applications". Intel, Microsoft, and Google learned this lesson. When will Mozilla? Well, as I just found out, they are learning it; but it's a long way from being done. (Plus, I'm already half way through with my rant. Why stop now?)
Browsers really are becoming the end-all-be-all of applications. While browsers don't actually do everything, they do provide a gateway for anything to be done. Kind of like having a modem back in the 80's. If you had one, you had access to the amazing world of wasting time. Even if it was just AOL or CompuServe, you were "connected". The same goes for browsers. If you have one - a relatively modern one - you can do your banking, your shopping, talk face to face; you can even watch freakin' movies. You can do all that and more - even at the same time. There's even a "programming" language built in.
Because they do so much all at once, there is that much less room for error. If something goes wrong in one place, it shouldn't drag the system down with it. There's no reason for that. Firefox has released its first set of features - OOPP (Out of Process Plugins) - that begins to deal with the problem. This prevents 3rd party code from causing Firefox to crash, but that doesn't go far enough. Each tab should be its own process. This is my number one favorite feature of Chrome. Opening and closing tabs is nigh instantaneous. (Yes, I realize Chrome hides the window/tab and does the real shutdown behind the scenes, but it's a separate process and doesn't slow down the rest of the "application".)
Now, there is a downside to this. A separate process per tab potentially means longer start-up times, more processor time and more RAM, and sharing data across tabs/processes has got to be a nightmare. As a user, though, I don't really care about any of that. I just want the application to respond reasonably well, and Chrome's GUI does this better than any other browser out there. There's even a Firebug plugin for Chrome. So, I may even give up on Firefox for dev purposes, excepting some QA test cases.
Now, as I found out about three-quarters of the way through my rant, Mozilla has the Electrolysis (or e10s) project under way; but it's a long way from being done. When they finish, I'll reconsider switching back to Firefox; but until then, it's Chrome all the way.
First a little background. This all takes place on my laptop, which has a mobile version of the i7 processor and 4GB of RAM; neither of which should be scoffed at. Plus, I am running the latest and greatest version of Firefox. Even with all that powerhouse, Firefox has been *pull my hair out* frustrating lately.
This problem has been going on for a weeks now, but today was my final breaking point. Firefox has been running for a few/many days. I'm not exactly sure how long - probably over a week. Regardless, I had over 30 tabs open. Every now and then - especially when closing or switching tabs - the browser would hang for around a minute. Really!? Closing a tab takes a minute? That can't be right. I have a freakin' i7 processor and oodles of RAM.
I checked Task Scheduler and Process Explorer. Nothing was taking up any significant amount of processor. Even Firefox was in single digits - as it should be with an i7 processor. Ok. Well, first thing, plugins. I disabled all but what I would consider essential plugins (I actually did this last week, but I wanted to see how it went before I made a rash judgement). The ones I kept are Adobe Acrobat, DivX, Quicktime, Flash, Silverlight and Adblock Plus. All well known and fairly stable plugins. Nothing wacky.
Disabling plugins had no effect. At least nothing I could notice. Ok. Maybe it's still one of the plugins. Checked Task Scheduler and Process Explorer and killed all the plugin-container.exe processes I could find. Still no effect. In some of these cases, there weren't even any plugin-container.exe processes running. This leaves one major thing that I can think of. Mozilla needs to learn an age old lesson for large applications. Shared memory is bad - meaning you absolutely have to have some sort of separation or isolation between components (Now this doesn't go for every application, but browsers definitely fit this bill). Linux learned the lesson ages and ages ago by saying "don't create monolithic applications". Intel, Microsoft, and Google learned this lesson. When will Mozilla? Well, as I just found out, they are; but it's a long way from being done. (Plus, I'm already half way through with my rant. Why stop now?)
Browsers really are becoming the end-all-be-all of applications. While browsers don't actually do everything, they do provide a gateway for anything to be done. Kind of like having a modem back in the '80s. If you had one, you had access to the amazing world of wasting time. Even if it was just AOL or CompuServe, you were "connected". The same goes for browsers. If you have one - a relatively modern one - you can do your banking, your shopping, talk face to face; you can even watch freakin' movies. You can do all that and more - even at the same time. There's even a "programming" language built in.
Because they dynamically do so much all at once, there is far less room for error. If something goes wrong in one place, it shouldn't drag the whole system down with it. There's no reason for that. Firefox has released its first feature - OOPP (Out of Process Plugins) - that begins to deal with the problem. This prevents third-party code from causing Firefox to crash, but that isn't far enough. Each tab should be its own process. This is my number one favorite feature of Chrome. Opening and closing tabs is nigh instantaneous. (Yes, I realize Chrome hides the window/tab and does the real shutdown behind the scenes, but it's a separate process and doesn't slow down the rest of the "application".)
Now, there is a downside to this. A separate process potentially means longer start-up times, more processor time, and more RAM; and sharing data across tabs/processes has got to be a nightmare. As a user, though, I don't really care about that. I just want the application to respond reasonably well, and Chrome's GUI does this better than any other browser out there. There's even a Firebug plugin for Chrome. So, I may even give up on Firefox for dev purposes, excepting some QA test cases.
Now, as I found out about three-quarters of the way through my rant, Mozilla has the Electrolysis (or e10s) project under way; but it's a long way from being done. When they finish, I'll reconsider switching back to Firefox; but until then, it's Chrome all the way.
Monday, July 18, 2011
Dynamic Linked Libraries (DLL) vs Static Libraries
We no longer use DLLs with the Platypus Billing System, except where absolutely necessary. In some cases, with high-level languages (such as Visual Basic 6 and Visual FoxPro) and 3rd party libraries (such as OpenSSL) written in C/C++, we have no other choice. Plus, there are ActiveX/COM libraries (such as MSXML, Mailbee, DBI, and Crystal Reports), which cannot be linked to statically. But, in many cases, it can be avoided.
Without getting into an argument over which is better or worse, when the stability of a product is on the line, having DLLs creates another point of failure. For that reason alone, it was more important for us to statically link our C/C++ code where possible. Sure, the binaries may be larger and updates basically mean a full re-install; but it has been well worth these minor difficulties.
Since the switch to Visual C++ 2005 and static linking back in 2009, the number of C++ dependency issues we have encountered is still in the single digits - and that is only because of ActiveX/COM. Just to relay the point, here are a few of the specific cases I have encountered over the past few years.
Case #1: PHP vs Pidgin
Both PHP and Pidgin include a spell check library - Aspell - in the form of aspell-15.dll. Since the web pages for our product are written in PHP, I - of course - need PHP installed on my dev machine. Also, I have Pidgin installed for chatting with technical support - or anyone else at work when a face-to-face confab is not required.
Now, normally these two products are not in conflict and everything works swimmingly. But, one day, I decided to grab one of the newer - more stable, more secure, and compiled in VC 2008 - PHP editions from the PHP for Windows site. Everything worked fine at first. Then, as happens, I needed to reboot. Afterwards, Pidgin crashed every time I tried to start it up.
After yanking my hair out using Dependency Walker and Process Monitor, I finally figured out that it was because of Aspell. I renamed aspell-15.dll in the PHP folder and everything started working again. Because PHP was in the system path, Pidgin was loading the PHP version of the DLL instead of the one in the Pidgin folder. It shouldn't have done this, and I could find no logical reason for it, but that is what was happening.
Regardless, I didn't have the time to look into it further. I knew the cause and could bypass it. Spell check is nice, but completely unnecessary for IM. So, I uninstalled Pidgin, and reinstalled it without the spell check feature. Problem solved - or, at least, dealt with.
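To see the same failure mode in miniature, here is a small Python sketch - purely illustrative, with made-up directory and module names - showing that search-path order, not proximity to the importer, decides which same-named library loads. This is the same trap that bit Pidgin with aspell-15.dll:

```python
import os
import sys
import tempfile

# Build two directories, each containing a module named "speller",
# mimicking PHP's and Pidgin's separate copies of aspell-15.dll.
base = tempfile.mkdtemp()
php_dir = os.path.join(base, "php")
pidgin_dir = os.path.join(base, "pidgin")
for d, version in [(php_dir, "php-bundled"), (pidgin_dir, "pidgin-bundled")]:
    os.makedirs(d)
    with open(os.path.join(d, "speller.py"), "w") as f:
        f.write(f"VERSION = {version!r}\n")

# The search path is consulted in order, much like the Windows DLL search
# order: whichever directory appears first wins, regardless of which
# application "owns" the module.
sys.path.insert(0, pidgin_dir)
sys.path.insert(0, php_dir)   # PHP's folder ends up first, as in the story

import speller
print(speller.VERSION)  # prints "php-bundled" - the "wrong" copy loads
```

The fix in the story - renaming the shadowing file - is the same idea as removing php_dir from the path here.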
Case #2: PHP vs OpenSSL
With our product, we include a COM DLL (tu_app.dll) for interacting with the Tucows Email Service. This COM library was written in Visual C++ 6.0 and was linked to some severely old versions of the OpenSSL libraries. Again, because I decided to go mucking about with my installation of PHP, I broke yet another thing on my dev machine.
I was performing some fixes for our integration with Tucows Email and had to do some unit tests. Every time I tried to load the COM object, the program would crash spectacularly. After some more hair pulling, I traced it down to the OpenSSL libraries. I replaced the DLLs installed by PHP with the ones included in our installation set and it started working again.
Problem solved? No, definitely not. While crashing my IM client is one thing, the possibility that someone could install a special version of PHP on the same machine as our product - which is normally the case - is another. Only the older versions of the OpenSSL libraries would work with our COM library.
Those OpenSSL libraries were ancient and would not pass any scrutiny when it came to PA-DSS. Plus, having our product crash because we required using outdated and insecure versions of the OpenSSL libraries was completely unacceptable. So, we ported the code from Visual C++ 6.0 to Visual C++ 2005 and statically linked to OpenSSL. Now, the problem was solved.
Case #3: ATL Vulnerability
When this problem first came out, I was working on a separate major rewrite/port of our C++ code - specifically a Windows service for hosting our API - from Visual C++ 6.0 to Visual C++ 2005. I had everything working. It was beautiful and simple code, it compiled without warnings, it had no memory leaks, and it passed every test I threw at it.
Next came compatibility testing. After making an installation set for our product, I started testing on all the operating systems we supported - Windows 2000 up to Windows Vista/2008. Upon startup on Windows Vista and 2008, the service immediately crashed. It worked fine on Windows 2000 and XP.
I checked the Event Log and found a side-by-side dependency error. Considering this was my first venture into something newer than VC6, I wasn't fully competent with Application Manifests at the time. So, I had no idea what this error really meant.
I checked the installation set to make sure it included the Visual C++ runtime - and it did. I checked the installation log (and Add/Remove Programs) to make sure it installed - and it did. After even more hair pulling, I found out about the ATL update.
The worst part was, no installation set for the Visual C++ runtime that included the ATL fix existed. There is now, but there wasn't at the time (or I just suck at using a search engine). So, I could try to install the runtime files manually, but that involved a huge amount of effort and testing on all those OS's - especially for something that had to be done that night. I needed to finish my testing so we could release the next day (and possibly grab some sleep that night). Plus, I had no idea what DLLs to install, where to install them, or how to deal with WinSxS from an NSIS installation set.
So, my only option was to switch to static linking. No more dependencies. No unnecessary points of failure. Or more simply, no more DLL Hell. Finally, problem solved and a few hours sleep before the release.
Case #4: DLL Preloading Vulnerability
This is a generic definition of case #1: a DLL from an unexpected location is loaded instead of the intended one. While case #1 wasn't officially an attack, it did crash a program and caused me a couple hours of unneeded stress.
Now, in cases like this, there are officially two ways to deal with it. First, you can mitigate the attack surface by using SetDllDirectory. This limits the possibility of an attack, but doesn't eliminate it, as I found out. The second way is to do away with the problem altogether by static linking. I am a firm believer that elimination is far better than mitigation - especially considering it requires no actual code change and reduces the amount of installation set testing required.
Sunday, July 17, 2011
Application Manifests & Visual Basic 6
Embedding Application Manifests in Visual Basic 6 binaries is really easy. Microsoft even wrote a command line utility just for this purpose. Well, not for VB6, but for binaries in general. The Manifest Tool (mt.exe), which is included in both Visual Studio and the Windows SDK, is extremely simple to use. The best part is that it handles any necessary padding and can update just about any binary with no fuss.
Here's an example command line using the naming convention of Visual Studio 2005, where the manifest filename contains the program name with ".intermediate.manifest" appended.
mt.exe -nologo -manifest "program.exe.intermediate.manifest" -outputresource:"program.exe;#1"
And now for a story...
When we were first confronted with the need for manifests - specifically for triggering UAC prompts in our configuration tools written in VB6 - I performed my due diligence. I researched the topic thoroughly, I took examples of the manifests provided by Microsoft, and I tested on each and every Windows OS we supported - Windows 2000 all the way up to Windows Vista/2008.
The one thing I couldn't find was a simple way to embed the manifest in those executables that could be easily automated. The articles I read covered GUI tools like XN Resource Editor and Resource Hacker, writing my own C/C++ program using UpdateResource, a long, winding route using the Resource Compiler (rc.exe), or finally just leaving the manifest as a separate file.
Even though manifests had been around since Windows XP, there wasn't a single article I could find that even mentioned the Manifest Tool. Even in the Microsoft articles I have found, there is never any mention of VB6 and the Manifest Tool together. Of course, VB6 was considered legacy by the time Application Manifests came out; so, while frustrating, I can't really blame them. I can blame my search engine skills, but that's no fun.
Anyway, all but the last option were complicated, convoluted, or required too much effort. We, of course, finally settled on that last option - using external manifests - out of necessity to get something out the door. It wasn't until we started migrating code from Visual C++ 6.0 over to Visual Studio 2005 - over a year and a half later - that I noticed the mt.exe command line in the build log. Now, along with code signing through signtool.exe, the Manifest Tool is included in much of our automated build process, and I am much happier for discovering it.
Saturday, July 09, 2011
Taxes Are Hard (Texas Tax Edition) - Part 2
Taxes are hard; and as Texas has proved so far, Texas taxes are extraordinarily difficult. Even after all the specifics laid out in part 1 of this post, a great deal of information is left before I can even begin talking about what it actually means.
State Tax
Regardless of whether something is sold by a company in Texas or is sold to a customer in Texas, the state tax of 6.25% always applies. This is perhaps the simplest feature of Texas taxes. If it weren't for the $25 internet access exemption or the 20% web development/information service exemption, Texas state taxes would be easy.
Local Sales Tax
Beyond the state tax, the next type of tax that must be calculated is the local sales tax. This tax is based on the location of the seller's place of business.
Local Use Tax
Next, after both the state tax and the local sales tax are calculated comes the local use tax. This tax is based on the location of the customer or where the customer receives the goods and services.
Further Complications of Local Taxes
Both the local sales and local use taxes are further broken down into four different locale types: city, county, special purpose districts and transit. So, combined, this creates nine - count them nine - different types of taxes that go into the calculation.
Next, after all that breakdown, city tax rates are different for each city, county tax rates are different for each county, and so on. All combined, the local tax rate - for both local sales and local use taxes - cannot exceed 2%. This limiting factor of 2% works on a priority basis, adding each subsequent type to the total until the 2% is reached. If adding one of the local tax rates would exceed the 2% limit, only the amount necessary to reach the 2% limit is used. The order of the local tax rates is as follows.
- local city sales tax
- local county sales tax
- local special purpose district sales tax
- local transit sales tax
- local city use tax
- local county use tax
- local special purpose district use tax
- local transit use tax
Also, while the terms "sales" and "use" imply different sets of rules or percentages, they actually don't differ. Local tax rates are the same for both sales and use. Plus, reporting of local taxes is done based on the different locale types: city, county, special purpose district, and transit. Beyond the initial calculation, the terms sales and use are not applied (at least, to my knowledge).
Finally, along those same lines, duplicates are ignored. Local sales taxes for the seller are calculated; then, local use taxes for the customer are calculated - ignoring the local use tax for any duplicates. For example, if both the company and the customer are located within the same county, the local county sales tax will apply but the local county use tax will not; or more aptly put, each local tax is only calculated once per jurisdiction.
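Putting the pieces together, the cap-and-priority logic described above can be sketched like this. (This is my own illustration, not official Comptroller code; the jurisdiction names and rates are hypothetical.)

```python
CAP = 0.02  # combined local sales + use rate cannot exceed 2%

def combined_local_rate(seller, customer):
    """seller/customer map a locale type to a (jurisdiction, rate) pair."""
    total = 0.0
    seen = set()
    # Sales (seller) rates are applied first, then use (customer) rates,
    # each in city -> county -> special purpose district -> transit order.
    for side in (seller, customer):
        for kind in ("city", "county", "special", "transit"):
            if kind not in side:
                continue
            jurisdiction, rate = side[kind]
            if (kind, jurisdiction) in seen:
                continue  # duplicate jurisdiction: only counted once
            seen.add((kind, jurisdiction))
            total += rate
            if total >= CAP:
                return CAP  # only enough to reach the 2% limit is used
    return total

# Seller and customer in the same (hypothetical) city: the city rate
# counts once, then the customer's county use tax is added on top.
seller = {"city": ("Anytown", 0.01)}
customer = {"city": ("Anytown", 0.01), "county": ("Some County", 0.005)}
print(round(combined_local_rate(seller, customer), 4))  # 0.015
```

If the seller alone already carries 2% or more of local sales taxes, the customer's use-tax side never contributes anything - which matches the priority ordering in the list above.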
The information provided in this article is just a summary of the Texas local tax calculations. The Window on State Government web site provides an article - the basis for this post - which covers many different scenarios with specific examples for each: the February 2009 Local Sales and Use Tax Bulletin - Guidelines for Collecting Local Sales and Use Tax.
Taxes Are Hard (Texas Tax Edition) - Part 1
Taxes are hard, and along the lines of "don't mess with Texas", internet taxes in Texas go above and beyond the norm. The basics of Texas internet taxes are as follows.
Note: Because of the complications of Texas taxes, this article is broken down into several manageable posts. The first two, of which, specifically cover a summary of rules for Texas taxes.
Internet Access Service
- Internet access services (including enhancements such as static ip addresses, email and instant messaging) are taxable. (Texas Tax Code - Section 151.0101(a)(16))
- Up to $25 for internet access per month is tax exempt. (Texas Tax Code - Section 151.325)
- Web development and information services (including web page design and web hosting) are taxable. (Texas Tax Code - Section 151.0101(a)(12))
- Twenty percent (20%) of the value of data processing and information services are tax exempt. (Texas Tax Code - Section 151.351)
- Taxes for these services went into effect on October 1, 1999. (Document 9906479L)
- Late fees are not taxable, but reinstatement (reactivation) fees are taxable. (Document 200001959L)
- The $25 exemption applies per purchaser not per account. So, if the purchaser has multiple accounts, up to $25 is exempt for all the accounts combined. (Texas Tax Code - Section 151.325(c))
- A seller who uses catalogs or the Internet to sell goods is treated the same as any other seller of taxable items. If you purchase merchandise through a catalog or the Internet from a seller located in Texas, you owe Texas sales tax on the purchase. If you purchase merchandise through a catalog or the Internet from a seller located outside of Texas and use the taxable item in Texas, then you owe Texas use tax on the purchase. An out-of-state mail-order company or an Internet company may hold a Texas Sales and Use tax permit and collect Texas tax. If the out-of-state seller does not have a Texas permit or does not collect Texas use tax, the use tax is due and payable by the purchaser. (Texas Sales Tax FAQ)
- Internet access taxes for Texas are grandfathered under the Internet Tax Freedom Act and are considered exempt from that act.
- The state tax rate is 6.25%. (Window on State Government - Texas Taxes - Sales and Use Tax)
- The city tax rate cannot exceed 2%. (Window on State Government - Texas Taxes - Sales and Use Tax)
- The county tax rate cannot exceed 1.5%. (Window on State Government - Texas Taxes - Sales and Use Tax)
- The combined tax rate cannot exceed 8.25%, where 6.25% is state taxes and the remaining 2% is comprised of local sales or use taxes in the form of city, county, transit and special district taxes. (Window on State Government - Texas Taxes - Sales and Use Tax)
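As a quick worked example, here is a sketch of how the $25 internet access exemption and the 20% information services exemption from the list above would apply to a hypothetical invoice. (The charges are made up; the rates and thresholds are the ones quoted in this post.)

```python
STATE_RATE = 0.0625          # Texas state sales tax
ACCESS_EXEMPTION = 25.00     # first $25 of internet access per month is exempt
INFO_SERVICE_EXEMPT = 0.20   # 20% of information services is exempt

def taxable_base(access_charge, info_service_charge):
    # Internet access: only the amount over $25 is taxable.
    access = max(0.0, access_charge - ACCESS_EXEMPTION)
    # Information services (e.g. web hosting): 80% of the charge is taxable.
    info = info_service_charge * (1.0 - INFO_SERVICE_EXEMPT)
    return access + info

# Hypothetical invoice: $39.95 internet access + $100.00 web hosting.
base = taxable_base(39.95, 100.00)   # 14.95 + 80.00 = 94.95 taxable
print(round(base * STATE_RATE, 2))   # state portion of the tax: 5.93
```

Local sales and use taxes (up to the 2% cap) would then be calculated on the same taxable base, on top of the state portion shown here.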
Friday, July 08, 2011
Application Manifests, UAC & Windows Vista - Part 2
Way back in the days of Windows 2000, new guidelines were introduced to improve the overall usability and security of the OS; although, in order to maintain compatibility, none of these guidelines was actually enforced at the time. Then, when Windows Vista was released, the game changed. Those guidelines were no longer optional, but there was a backup plan.
New Guidelines for Windows 2000
- Only binaries or read-only files should be stored in Program Files.
- Any documents or user-created files should be stored in My Documents.
- Temporary files should be created in the user (or system) Temp folder.
- Anything else should be written to Application Data.
User Account Control (UAC)
With the introduction of UAC, it was no longer possible to write to Program Files without administrative privileges, but somehow older programs still worked. This is where the requestedExecutionLevel flag in Application Manifests comes into play. If the flag is missing from the manifest, the OS does its fancy footwork of guessing whether the program should be elevated. If the program is not elevated, then the program is run in compatibility mode. Any attempts to write to a system folder - such as Program Files or System32 - or to a system registry hive - such as HKLM - will result in virtualization.
While you may think your program is writing to HKLM, it's not. Not really. It may look that way to you. Even to the program itself it will appear that way, because it is actually reading from a virtualized section of the registry (under HKCU\Software\Classes\VirtualStore; file writes are similarly redirected to the user's VirtualStore folder). This means that in your own little world everything is working as you expected, but it is not directly affecting the OS. So, any changes you make to your little world don't affect anyone else who logs into the OS. This is one of the key points that makes UAC really work. Without this, Vista would truly be the nightmare that you hear about in those Mac commercials.
Still, even with virtualization in place, some wacky things can occur. For example, there's a specific case I found with the SDelete utility provided by Sysinternals, which runs in compatibility/virtualization mode. If the program is run as unprivileged and is passed a file located in Program Files, it simultaneously finds the file and cannot find the file. It actually finds the file in Program Files, and then attempts to open a file of the same name in the Virtual Store. So, after all that, it reports success, saying the file was correctly wiped and deleted, but the file in Program Files is never touched.
New Guidelines for Windows 2000
- Only binaries or read only files should be stored in Program Files.
- Any documents or user created files should be stored in My Documents.
- Temporary files should be created in the user (or system) Temp folder.
- Anything else should be written to Application Data.
With the introduction of UAC, it was no longer possible to write to Program Files without administrative privileges, but somehow older programs still worked. This is where the the requestedExecutionLevel flag in Application Manifests comes into play. If the flag is missing from the manifest, the OS does its fancy footwork of guessing whether the program should be elevated. If the program is not elevated, then the program is run in compatibility mode. Any attempts to write to a system folder - such as Program Files or System32 - or write to a system registry hive - such as HKLM - will result in virtualization.
While you may think your program is writing to HKLM, it's not. Not really. It may look like it to you. Even to the program itself, it will appear that way, because it is actually reading from a virtualized section of the registry. This means that in your own little world, everything is working as you expected, but it is not directly affecting the OS. So, any changes you make to your little world don't affect anyone else who logs into the OS. This is one of the key points that makes UAC really work. Without this, Vista would truly be the nightmare that you hear in those Mac commercials.
Still, even with virtualization in place, some wacky things can occur. For example, there's a specific case I found with the SDelete utility from SysInternals, which runs in compatibility/virtualization mode. If the program is run unprivileged and is passed a file located in Program Files, it simultaneously finds the file and cannot find it. It actually finds the file in Program Files, but then attempts to open a file of the same name in the Virtual Store. So, after all that, it reports success, saying the file was correctly wiped and deleted, but the file in Program Files is never touched.
User Account Control (UAC)
Application Manifests, UAC & Windows Vista - Part 1
Support for Manifest Resources continued with Windows Vista. In addition to the new Shell Common Controls library and Side-by-side assemblies introduced in Windows XP, support for High-DPI Applications and User Account Control (UAC) was added to the Manifest Resource. The more scrutinized of the two features, of course, is User Account Control, which added security to the operating system by letting a program specify what level of permission it requires to run.
With Windows XP and lower, an administrator was always logged in as an administrator and a restricted user was always logged in as a restricted user. This meant that any program executed by an administrator would always run with full administrative rights, while any program executed by a restricted user would never have administrative privileges.
With the introduction of UAC, if a program required administrative privileges, such as writing to HKLM in the registry or writing to a system folder, the OS would notify the user of the required privileges before executing potentially damaging code - even when logged in as an administrative user. The OS determined this through the requestedExecutionLevel flag in the application manifest. If this flag was missing, the OS would do some fancy footwork and make a guess.
The key point here is that by embedding the required privilege within the program, a user - even a user with administrative privileges - did not have to work in a completely unprotected environment. This helps restrict access to much of the operating system when performing day-to-day tasks; and when malicious software is introduced on the machine, the restricted access given to that software should help mitigate any damage.
Vista Styles
- Microsoft Download - Windows Vista/7 User Interface Guidelines
- MSDN Library - How to Create the Best User Experience for Your Application
- MSDN Library - The Windows Vista and Windows Server 2008 Developer Story: Windows Vista Application Development Requirements for User Account Control (UAC)
- MSDN Library - Windows Vista Application Development Requirements for User Account Control Compatibility
- MSDN Library - Designing UAC Applications for Windows Vista - Step 6: Create and Embed an Application Manifest (UAC)
- Microsoft TechNet - User Account Control Step-by-Step Guide
- Microsoft TechNet - Understanding and Configuring User Account Control in Windows Vista
Application Manifests & Windows XP
Starting with Windows XP, Microsoft made extensive changes to the Window Manager and Shell Common Controls. These changes included a whole new set of controls that programs could use for a fresh look and additional Theme support. Older programs would continue to use the original Shell Common Controls library (comctl32.dll version 5), while newer programs would use the new version (comctl32.dll version 6, deployed as a side-by-side assembly). By shipping the new controls separately instead of updating the existing library in place, compatibility issues with older programs were completely bypassed.[1]
Whether a program would use the new library was determined through a new PE resource type called a Manifest. This Manifest Resource - in its original form - contained a list of DLLs and other resources that would be used with the executable. This allowed easier control over dependencies by specifying which versions of a DLL would be used with the program - specifically which Shell Common Controls library. So, newer programs, which included a specific Manifest Resource, would use the version 6 controls; and older ones, which did not have a Manifest Resource, would continue to use the original comctl32.dll.
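As a sketch, a manifest that opts an executable into the new common controls declares a dependency on the version 6 assembly. This is the well-known Common-Controls fragment; the publicKeyToken shown is the standard one Microsoft publishes for this assembly:

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <dependency>
    <dependentAssembly>
      <!-- Bind to comctl32.dll version 6 from the side-by-side store
           instead of the version 5 library in System32. -->
      <assemblyIdentity
        type="win32"
        name="Microsoft.Windows.Common-Controls"
        version="6.0.0.0"
        processorArchitecture="*"
        publicKeyToken="6595b64144ccf1df"
        language="*"/>
    </dependentAssembly>
  </dependency>
</assembly>
```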
This created the concept of Side-by-side Assemblies - a new feature in Windows XP - where multiple versions of a DLL could be installed on a machine, and depending on the Manifest Resource within the binary, only a specific version of that DLL would be used by the program. For several reasons Microsoft has shifted away from supporting side-by-side assemblies - at least for the C/C++ runtime libraries - in favor of appending the version to the file name.[2]
XP Styles
- Microsoft Download - Windows 2000/XP User Interface Guidelines
- The Old New Thing - The history of the Windows XP common controls
- MSDN Library - Using Windows XP Visual Styles With Controls on Windows Forms
- MSDN Library - Side-by-side Assemblies
- MSDN Library - Dynamic-Link Library Search Order
- Wikipedia - Side-by-side Assembly
- MSDN Library - Using Side-by-Side Assemblies as a Resource
- MSDN Library - Enabling an Assembly in an Application Hosting a DLL, Extension, or Control Panel
- MSDN Library - Enabling an Assembly in an Application Without Extensions
- MSDN Library - Per-application Configuration on Windows Server 2003
- MSDN Library - Per-application Configuration on Windows XP
- MSDN Library - Troubleshooting C/C++ Isolated Applications and Side-by-side Assemblies
- MSDN Library - Concepts of Isolated Applications and Side-by-side Assemblies
- Microsoft Support - A new CWDIllegalInDllSearch registry entry is available to control the DLL search path algorithm
- Microsoft Support - Some third-party applications that use external manifest files stop working after you install Windows server 2003 Service Pack 1
- MSDN Library - Dynamic-Link Library Redirection
- Nothing ventured, nothing gained - DLLs and resource ID 2 manifests
Tuesday, June 28, 2011
Starting Over
The original attempt at this blog was somewhat of a failure. As happens all too often, a lack of interest or devotion from either the writer or the audience will cause a blog to fail. So... Here we go again.