Sunday, June 02, 2013

SQL Server 2012 Install Fails on Server Core 2008 R2 VHD - Object reference not set to an instance of an object.

So, I'm going through the Administering Microsoft SQL Server 2012 Databases book for the 70-462 certification exam. The only way I could quickly get access to Windows Server Core for the test exercises was to download the VHD provided by Microsoft.

Well, strange thing: the VHD is missing a critical registry key. Without this key, SQL Server setup fails with a horribly vague error. To make it more fun, while the error tells me to look in a file named summary.txt, there is no file named summary.txt anywhere.
The following error occurred:
Object reference not set to an instance of an object.

Error result: -2147467261
Result facility code: 0
Result error code: 16387

Please review the summary.txt log for further details
Slightly more - but also unhelpful - information can be found in the Component Updater log file generated by the installer.
Exception summary:
The following is an exception stack listing the exceptions in outermost to innermost order
Inner exceptions are being indented

Exception type: System.NullReferenceException
    Message: 
        Object reference not set to an instance of an object.
    Data: 
      DisableWatson = true
    Stack: 
        at Microsoft.SqlServer.Configuration.MsiExtension.ArpRegKey.CleanupPatchedProductRegistryInfo()
        at Microsoft.SqlServer.Configuration.MsiExtension.SetPatchInstallStateAction.ExecuteAction(String actionId)
        at Microsoft.SqlServer.Chainer.Infrastructure.Action.Execute(String actionId, TextWriter errorStream)
        at Microsoft.SqlServer.Setup.Chainer.Workflow.ActionInvocation.ExecuteActionHelper(TextWriter statusStream, ISequencedAction actionToRun, ServiceContainer context)
Anyway, after a couple of hours of messing around with it, I find a post on the Microsoft Forums that finally sheds some light on it. Apparently the HKLM\Software\Microsoft\Windows NT\CurrentVersion\Uninstall registry key doesn't exist in the distributed virtual hard drive. Either adding this key or running an installer that creates it seems to fix the problem.
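Since Server Core has no regedit UI to click through, the missing key can be created from an elevated command prompt on the VHD. A sketch - creating the empty key is all that appeared to be needed in my case:

```
reg add "HKLM\Software\Microsoft\Windows NT\CurrentVersion\Uninstall"
```

After that, re-running the SQL Server installer should get past the NullReferenceException.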

Hopefully, this will save others some debugging time and effort.

Friday, May 31, 2013

Configuring DEP on Windows Server 2008 R2 from a 32bit NSIS Installer - Revisited

Thanks to another blog, The Old New Thing, I found out there is another way to handle writing to the 64bit registry from a 32bit program - without needing to deal with file system redirection. Well, almost without the need for it. Starting with Windows Vista, a special alias, %windir%\Sysnative, was added to allow access to the real System32 directory even when running a 32bit program.

Unfortunately, Windows XP and Windows Server 2003 don't have this feature; and since we still work with both of these versions of Windows, I cannot incorporate it into our code.  Anyway, since the original post is one of my more popular ones, I thought I would update the code.
!include LogicLib.nsh
!include WinVer.nsh
!include x64.nsh

var /GLOBAL NSISRegPath
StrCpy $NSISRegPath "SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers"

var /GLOBAL EXERegPath
StrCpy $EXERegPath "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers"

${IF} ${AtLeastWinVista}
    WriteRegStr HKLM "$NSISRegPath" "$INSTDIR\program.exe" "DisableNXShowUI"
    ${IF} ${RunningX64}
        ExecWait '$WINDIR\SysNative\reg.exe add "$EXERegPath" /v \
            "$INSTDIR\program.exe" /d "DisableNXShowUI"'
    ${ENDIF}
${ENDIF}

Note: Lines that end with a backslash represent long lines that have been wrapped.

Thursday, May 30, 2013

How To Encrypt SQL Server Connections - Part 2

In my previous post, I covered the steps necessary to enable and enforce encryption for all connections to Microsoft SQL Server. Now, this was nothing more than a step-by-step of what to do. It covered nothing about why or how important this is. Hopefully, I can impress upon the reader how important it is to encrypt connections. Considering the effort involved is extremely minimal - less than an hour of work - this should be the first step after installing SQL Server.

How Vulnerable is your Data?

Just to prove how easy it is to get access to the underlying data, I installed Wireshark on my dev machine (actually, I already had it installed - it's a great tool!), started it up and entered the filter "tds" (all lower case).  This shows me all the SQL statements going out of my machine and all the responses that come back.  While the responses are still TDS encoded, it isn't difficult to parse out the data - especially strings - just by looking at it.  This can give me access to a boatload of information.



The only thing SQL Server really tries to protect is SQL authentication. Starting with SQL Server 2005, the authentication process attempts an SSL handshake before authentication occurs. For the purposes of this post, I am assuming the client supports SSL authentication and the handshake is successful. Please note that this is very different from Extended Protection for Authentication, which is an additional layer on top of SSL encryption. Also, the encryption described here is limited to authentication. After authentication succeeds, the SSL channel is shut down and the connection reverts to an unencrypted state.

Anyway, when connecting to a SQL Server that isn't configured to use SSL, the server will search for an appropriate server authentication certificate (with an accessible private key and matching the NetBIOS name or FQDN) in the Windows server certificate store and use that. If one is not found, a self-signed certificate is generated on-the-fly and is used specifically to protect the authentication portion of the connection. Any subsequent SQL statements or RPC calls are left unencrypted - and so are the results that are sent back.

Note: SQL Server actually checks for certificates at startup. If a certificate is added or becomes accessible after startup, SQL Server will not use it. Likewise, if a certificate is removed or becomes inaccessible after startup, SQL Server gracefully falls back to using a self-signed certificate where necessary.

Configuring SQL Server to Require Encryption

If you haven't read it yet, please, please, please, read my previous post. Getting SQL Server configured to require encryption is extremely simple. You can even use a self-signed certificate if need be. As long as connections to your SQL Server are unencrypted, your data is vulnerable to anyone with a packet sniffer and access to your network.

Configuring a Data Source to Require Encryption

Configuring a Data Source to require encryption on the client side is also easy. Both the SQL Server driver and the SQL Server Native Client driver in the ODBC Data Source Administrator provide a checkbox for enabling encryption. On the fourth page of the wizard, simply check the "Use strong encryption for data" setting and save the Data Source. All connections using that Data Source will now require encryption and perform client side validation of the server's certificate.

Note: This only tells the client to require encryption.  It does not enable encryption, something which must be done on the server.
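For DSN-less connections, the same client-side setting can be expressed with connection string keywords. A sketch - the server name is a placeholder, and the keyword names are from the SQL Server Native Client connection string syntax:

```
Driver={SQL Server Native Client 11.0};Server=sqlserver.domain.com;
Trusted_Connection=yes;Encrypt=yes;TrustServerCertificate=no;
```

Setting TrustServerCertificate=yes would skip the certificate validation checks described below, which defeats much of the point.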

SQL Server driver wizard

SQL Server Native Client driver wizard

What Validation Checks Occur on the Client Side?

Since validation occurs (I'm assuming) via WinVerifyTrust, all the bells and whistles that come with certificate validation in Windows apply. There are, however, a couple of small differences in how the SQL Server driver and the SQL Server Native Client driver handle client side validation. The SQL Server Native Client driver includes more detailed descriptions of any SSL errors, while the SQL Server driver simply returns a generic SSL failure error. In addition, the SQL Server Native Client driver includes a principal name check, which compares the server name in the Data Source (excluding the SQL Server instance name) against the common name in the server's certificate.

For example, if the SQL Server is using a certificate with an FQDN but the Data Source only specifies the NetBIOS name, the validation will fail. So, sqlserver\myinstance will fail if the server's certificate has a common name of sqlserver.domain.com, but sqlserver.domain.com\myinstance will succeed. The same goes for specifying the IP address instead of a server name.

Digging a little deeper into the validation check and how WinVerifyTrust works, the certificate must be signed by a trusted root certificate or have intermediary certificates that are signed by a trusted root certificate. The certificate can be purchased from any Certificate Authority, can be created from an internal Windows Server Certificate Authority, can be created from an internal OpenSSL Certificate Authority (as long as the certificate authority roots are distributed to all Windows machines), or even - and this is a little wacky - a self-signed certificate (as long as the certificate is distributed as a trusted certificate - not necessarily trusted root certificate - to all Windows machines).
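For the OpenSSL route, generating a self-signed server certificate is a one-liner. A sketch - the file names and common name are examples; the CN must match your SQL Server's fully qualified domain name:

```shell
# Create a 2048-bit key and a self-signed certificate in one step
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=sqlserver.domain.com" \
    -keyout sqlserver.key -out sqlserver.crt
```

On OpenSSL 1.1.1 and later, adding -addext "extendedKeyUsage=serverAuth" will also satisfy the Server Authentication EKU requirement covered in the previous post.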

Yes.  Even a self-signed certificate will work. As long as the client machines trust the certificate, client validation will pass. If you don't enable encryption on the client side, the certificate doesn't even have to be trusted on the client machines.

Given that a SQL Server should normally be accessible only internally, and that client side validation is optional, there is not really even a reason to purchase an SSL certificate. Any certificate will do, as long as it fits the requirements for SQL Server.

Notes

Use Cases

For the purposes of this discussion, I have included a list below of all the applicable scenarios relating to client and server encryption settings.  Hopefully, this gives an idea of the underlying functionality and answers some common questions.
  • Client does not require encryption and server is configured to use a specific certificate but not require SSL. Both the SQL Server driver and the SQL Server Native Client driver will work without errors. The connection will be unencrypted, although the authentication process will use the server's certificate for encrypting the authentication handshake. Certificate validation is completely skipped on the client side.
  • Client does not require encryption and server is configured to require SSL. Both the SQL Server driver and the SQL Server Native Client driver will work without errors. The connection - authentication and otherwise - is encrypted using the server's certificate. Certificate validation is completely skipped on the client side.
  • Client requires encryption, but server doesn't have access to a certificate & key.  Both the SQL Server driver and the SQL Server Native Client driver will generate SSL errors and abort the connection. Since the client is demanding an SSL connection and the server only has a self-signed certificate it generated on-the-fly, the certificate validation fails.
  • Client requires encryption, but server is not configured to use SSL - although it has access to a certificate and key. Both the SQL Server driver and the SQL Server Native Client driver will work without errors (if the certificate passes validation checks on the client side). Since the client is demanding an SSL connection, the server does its best to find and use one. If more than one server authentication certificate and private key are accessible, you have no guarantee which certificate it will use.
  • Client requires encryption and server is configured to use SSL but not require it. Both the SQL Server driver and the SQL Server Native Client driver will work without errors (if the certificate passes validation checks on the client side). Since the client is demanding an SSL connection, all interaction is encrypted using the server's certificate. 
  • Client requires encryption and server is configured to require SSL. Both the SQL Server driver and the SQL Server Native Client driver will work without errors (if the certificate passes validation checks on the client side). Since the client is demanding an SSL connection, all interaction is encrypted using the server's certificate. 

Client Side Errors

During the course of my testing, I ran into three SSL errors that can occur on the client side - at least, three errors that I found when using the [Test Data Source...] button in the ODBC Data Source Administrator window; there are probably many more. One comes from the SQL Server driver and two come from the SQL Server Native Client driver. I have included screenshots below of each.


Generic SSL error from the SQL Server driver

Certificate Chain SSL error from the SQL Server Native Client driver

Principal Name SSL error from the SQL Server Native Client driver

Friday, April 19, 2013

How To Encrypt SQL Server Connections

Open the Logical Certificate Store for the Local Machine

  1. Start the Microsoft Management Console (mmc.exe)
  2. Under the File menu select Add/Remove Snap-in… to launch the Add or Remove Snap-ins window.
  3. Select Certificates from the list of Available Snap-ins.
  4. Click [Add >] to launch the Certificates snap-in configuration window.
  5. Select Computer account
  6. Click [Next]
  7. Select Local computer
  8. Click [Finish] to close the Certificates snap-in configuration window.
  9. Click [OK] to close the Add or Remove Snap-ins window.

Create a Certificate Request

  1. In the Microsoft Management Console, select Certificates (Local Computer) on the left side of the window. This will display a list of certificate categories.
  2. Right-click on Personal on the right side of the window.
  3. Click on Create custom request under All Tasks -> Advanced Operations to launch the Certificate Enrollment wizard.
  4. Click [Next] to go to the Certificate request page
  5. From the Template dropdown select (No template) Legacy key and leave the Request format as PKCS #10.
  6. Click [Next] to go to the Certificate information page.
  7. Expand the request details by clicking on the arrow button on the right side of the window.
  8. Click [Properties] to launch the Certificate Properties window.
  9. On the General tab, in the Friendly Name field, enter SSL Certificate for SQL Server. The friendly name can actually be anything, but it should be easily distinguishable as the certificate for the SQL Server.
  10. On the Subject tab, add the Common Name attribute with the actual name of the server. If the server is part of a Windows domain, it must be the fully qualified name of the server including the domain. If the server is not part of a Windows domain, it must be the NetBIOS name of the server.
  11. Also on the Subject tab, add any other attributes required by your Certificate Authority
  12. On the Private Key tab, change the Key Type to Exchange.
  13. Expand the Key options group on the Private Key tab and change the Key size to 2048.
  14. Expand the Key permissions group on the Private Key tab, check Use custom permissions.
  15. Click [Set permissions] to open the Permissions window.
  16. From here, add the startup account for the SQL Server service. This is the “Log on as” Windows account used by the SQL Server service. If necessary, this can be configured later.
  17. Click [OK] to close the Permissions window.
  18. Click [OK] to close the Certificate Properties window.
  19. Click [Next] in the Certificate Enrollment wizard to go to the Export page of the wizard.
  20. Enter (or browse for) a File Name for the certificate request export.
  21. Click [Finish].
Once complete, a Certificate Request file will be generated and can be turned into a Certificate by any certificate authority.
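The numbered steps above can also be done without the wizard, using certreq.exe and an INF file. This is a sketch - the Subject is a placeholder, and the field values mirror the wizard choices (Legacy key template, Exchange key type, 2048-bit key, machine store):

```
; request.inf -- then run: certreq -new request.inf request.req
[NewRequest]
Subject = "CN=sqlserver.domain.com"
FriendlyName = "SSL Certificate for SQL Server"
KeySpec = 1              ; AT_KEYEXCHANGE, i.e. the Exchange key type
KeyLength = 2048
MachineKeySet = TRUE     ; put the private key in the local machine store
RequestType = PKCS10

[EnhancedKeyUsageExtension]
OID = 1.3.6.1.5.5.7.3.1  ; Server Authentication
```

Either way, the private key permissions for the SQL Server service account still need to be set afterward.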

Generate Signed Certificate

<<insert magic here>>

I leave it up to the reader to decide how to generate a signed certificate.  Whether using a 3rd party certificate authority, a Windows certificate authority in Active Directory, or generating a self-signed certificate, all will work.  And, YES, a self-signed certificate will work - at least, at this stage in the game.  While the SQL Server can require encrypted connections, it is up to the client to decide whether certificate validations occur. This will be covered later in Part 2.


Import the Signed Certificate

  1. In the Microsoft Management Console, select Certificates (Local Computer) on the left side of the window.
  2. Right-click on Personal on the right side of the window.
  3. Click on Import under All Tasks to launch the Certificate Import Wizard.
  4. Click [Next].
  5. Enter (or browse for) the signed certificate file generated by the certificate authority.
  6. Click [Next].
  7. Click [Next].
  8. Click [Finish] to close the Certificate Import Wizard.

Configure SQL Server to Use Encrypted Connections

  1. Start the SQL Server Configuration Manager.
  2. Expand SQL Server Network Configuration.
  3. Right-click on Protocols for MSSQLSERVER. If SQL Server is installed as a named instance, MSSQLSERVER will be replaced by the name of the instance.
  4. From the context menu, click Properties to launch the Protocols for MSSQLSERVER Properties window.
  5. On the Flags tab, select Force Encryption and change the value to Yes.
  6. On the Certificate tab, select the SSL Certificate for SQL Server certificate from the dropdown. The certificates listed here will be listed by the Friendly Name on the certificate. If not specified, the Common Name will be listed instead.
  7. Click [OK] to close the Protocols for MSSQLSERVER Properties window.
  8. In the SQL Server Configuration Manager window, select SQL Server Services.
  9. Right-click on SQL Server (MSSQLSERVER) on the right side of the window.
  10. From the context menu, click Restart.

Notes


Minimum SSL Certificate Requirements

In some cases, the certificate may not appear in the SQL Server Configuration Manager window. Below are the absolute minimum requirements for a certificate to show in the window and for it to work with SQL Server.
  • The Template used must be (No template) Legacy key. This allows the Key Type to be changed.
  • Private key must have a Key Type of Exchange.
  • The Common Name (or Issued To) attribute must be the same as the server name. If part of a domain, this will be the fully qualified domain name of the machine. If not part of a domain, it will simply be the NetBIOS name of the machine.
  • The Enhanced Key Usage for the certificate must allow for Server Authentication (1.3.6.1.5.5.7.3.1).
  • The certificate & private key must be stored in the logical certificate store for the local computer.
  • The private key must be accessible to the Log On as Windows account for the SQL Server service. Even if inaccessible, the certificate will display, but the service will fail to start. 

SQL Server Fails to Restart

If permissions are not properly configured, the exception 0x8009030d may occur during SQL Server startup and be logged in the SQL Server ERRORLOG file. Because SQL Server has been configured to require encrypted connections, this will prevent the SQL Server service from starting. The full text of the error will be similar to the following:
The server could not load the certificate it needs to initiate an SSL connection. It returned the following error: 0x8009030d. Check certificates to make sure they are valid.
To resolve the issue, modify the permissions for the certificate's private key as covered in Configure Private Key Permissions. Alternately, reconfigure the SQL Server to use unencrypted connections as covered in Configure SQL Server to Use Unencrypted Connections.

Configure Private Key Permissions

If the SQL Server fails to start, check the permissions to the certificate’s private key.
  1. In the Microsoft Management Console, select Certificates (Local Computer) on the left side of the window.
  2. Double-click on the Personal category on the right side of the window.
  3. Double-click on Certificates on the right side of the window.
  4. Right-click on the certificate and click Manage Private Keys… under All Tasks. This will open the Permissions window for the certificate’s private key.
  5. From here, add the startup account for the SQL Server service. This is the “Log on as” Windows account used by the SQL Server service.
  6. Click [OK] to close the permissions window.

Configure SQL Server to Use Unencrypted Connections

If all else fails, reverting to unencrypted connections may be the only way to restore access to the SQL Server.
  1. Start the SQL Server Configuration Manager.
  2. Expand SQL Server Network Configuration.
  3. Right-click on Protocols for MSSQLSERVER. If SQL Server is installed as a named instance, MSSQLSERVER will be replaced by the name of the instance.
  4. From the context menu, click Properties to launch the Protocols for MSSQLSERVER Properties window.
  5. On the Flags tab, select Force Encryption and change the value to No.
  6. On the Certificate tab, click [Clear].
  7. Click [OK] to close the Protocols for MSSQLSERVER Properties window.
  8. In the SQL Server Configuration Manager window, select SQL Server Services.
  9. Right-click on SQL Server (MSSQLSERVER) on the right side of the window.
  10. From the context menu, click Restart.

Friday, July 06, 2012

Formatting XML using MSXML

Here's another useful bit of code used to generate formatted XML.  This particular code - in C++ form - will eventually be incorporated into the Platypus API Service, when logging API calls is enabled.

Function PrettyPrintXML(strXML)

    Dim objReader, objWriter
    Set objReader = CreateObject("MSXML2.SAXXMLReader.6.0")
    Set objWriter = CreateObject("MSXML2.MXXMLWriter.6.0")

    objWriter.indent = True
    objWriter.standalone = False
    objWriter.omitXMLDeclaration = False
    objWriter.encoding = "utf-8"

    Set objReader.contentHandler = objWriter
    Set objReader.dtdHandler = objWriter
    Set objReader.errorHandler = objWriter

    objReader.putProperty _
        "http://xml.org/sax/properties/declaration-handler", _
        objWriter
    objReader.putProperty _
        "http://xml.org/sax/properties/lexical-handler", _
        objWriter

    objReader.parse strXML

    PrettyPrintXML = objWriter.output

End Function



The credit for this actually goes to Daniel Rikowski on StackOverflow.  All I did was convert it into usable VBScript.
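If you just need quickly formatted XML outside of COM, the same job can be done in Python's standard library. A minimal sketch for comparison only - nothing here is part of Platypus or MSXML:

```python
from xml.dom import minidom

# Parse a compact XML string and re-serialize it with indentation
pretty = minidom.parseString("<a><b>1</b></a>").toprettyxml(indent="  ")
print(pretty)
```

Like MXXMLWriter with indent = True, toprettyxml emits an XML declaration followed by one element per line.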

Wednesday, May 09, 2012

SQL Challenge - Part 1 - How to Find Contact Email Addresses in Platypus That May Cause an SMTP 550 Error.

Occasionally, if I am very lucky, someone will confront me with what I call the "SQL Challenge".  Way back in the day of the early 2000's, one of my coworkers and I would try to come up with ways to stretch our SQL knowledge.  Thanks to the "SQL Challenge", I am now capable of some fun acrobatics using just a little SQL.

The rules are fairly simple, once you have the actual challenge.  The SQL must be encapsulated into a single SQL statement - be that select, delete, insert or update.  It can have all the joins, subclauses, derived tables, and whatever else SQL provides, as long as it is included in a single statement. If inserting, updating or deleting, you are allowed a separate SQL statement for each table, but that is the only exception to the one-statement rule. So, no cursors or loops. You get bonuses for using ANSI SQL syntax and for how quickly you finish.

Today, I got one of those challenges.  Let's start with the givens.

1.  We have one or more email addresses stored in customer.email.
2.  These emails can be comma delimited or semicolon delimited.
3.  There may be spaces embedded before or after commas/semicolons.
4.  The emails may be hosted internally or by a 3rd party provider, or a combination of the two.
5.  A list of domains is stored in domain_item.domain.  These domains are hosted internally.
6.  A list of email addresses is stored in email_data.emailaddr.  These emails are hosted internally.
7.  When sending email messages to the email addresses stored on customer.email, if one of the email addresses is on a hosted domain but the email address is not in the list of hosted emails, the SMTP server will return a 550 (mailbox not found) error.

Now, the challenge.  Given all of the above, we want to find a list of customers with email addresses on customer.email that will generate a 550.  This means we want to find a list of email addresses attached to a hosted domain but are not a hosted email address.  After a little less than an hour, I made this...


select
/* The unique customer id */
    customer.id,
/* The delimited list of contact email addresses */
    customer.email,
/* A hosted domain matching one of the email addresses */
    domain_item.domain,
/* The number of email addresses in customer.email */
    len(customer.email) - len(replace(customer.email, '@', '')) as email_count,
/* The number of emails that match the hosted domain */
(
    len(replace(';'+replace(customer.email,' ','')+';',',', ';'))
    - len(replace(replace(';'+replace(customer.email,' ','')+';',',', ';'), '@'+domain_item.domain+';',''))
) / len('@' + domain_item.domain + ';') as domain_count,
/* The number of hosted email addresses that match the hosted domain  */
(
    select count(*)
      from email_data
     where customer.id = email_data.d_custid
       and ';'+replace(replace(customer.email,' ',''),',',';')+';' like '%;'+email_data.emailaddr+';%'
       and email_data.emailaddr + ';' like '%@' + domain_item.domain + ';'
) as match_count
  from customer
 inner join domain_item
    on replace(';'+replace(customer.email,' ','')+';',',',';') like '%@'+domain_item.domain+';%'
/* Strip the hosted domain from the delimited list */
/* Comparing the difference divided by the length of the hosted domain name */
/* This will let us know how many emails _should_ be hosted for this domain */
 where
(
    len(replace(';'+replace(customer.email,' ','')+';',',',';'))
    - len(replace(replace(';'+replace(customer.email,' ','')+';',',',';'), '@'+domain_item.domain+';', ''))
) / len('@'+domain_item.domain+';')
/* Compare the _should_ total against the _actual_ total */
/* If they don't match exactly, include the customer in the result */
<> (
/* Find the number of hosted email addresses for this customer */
/* That match one of the email addresses on the delimited list */
/* This will let us know how many emails are _actually_ hosted for this domain */
    select count(*)
      from email_data
     where customer.id = email_data.d_custid
       and replace(';'+replace(customer.email,' ','')+';',',',';') like '%;'+email_data.emailaddr+';%'
       and email_data.emailaddr+';' like '%@'+domain_item.domain+';'
)
go
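The length-difference trick in the WHERE clause is easier to see outside of SQL. Here is the same counting logic as a small Python sketch - domain_count is a hypothetical helper for illustration, not part of the challenge answer:

```python
def domain_count(email_list: str, domain: str) -> int:
    """Count how many addresses in a delimited list belong to a domain."""
    # Normalize: strip spaces, unify delimiters to ';', wrap with ';'
    s = ";" + email_list.replace(" ", "").replace(",", ";") + ";"
    needle = "@" + domain + ";"
    # Same trick as the SQL: length before minus length after removal,
    # divided by the needle length, gives the number of occurrences
    return (len(s) - len(s.replace(needle, ""))) // len(needle)

print(domain_count("bob@example.com, sue@other.net;joe@example.com",
                   "example.com"))  # 2
```

The query's match_count subselect then computes the same total from email_data; any customer where the two disagree is a 550 waiting to happen.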





Sunday, May 06, 2012

Base64 Encoding using MSXML


At the time we were developing the base64 encoder - covered in my last post - there already existed a quick and dirty way to perform base64 encoding/decoding through MSXML.  Even though we have decided not to pursue this for any Platypus development, it might benefit someone.  Here are examples for using MSXML for base64 encoding/decoding from within VBScript.

    Function Base64Encode (strData)
        Set objDocument = CreateObject("MSXML2.DOMDocument")
        Set objNode = objDocument.createElement("document")
        objNode.dataType = "bin.base64"
        objNode.nodeTypedValue = strData
        Base64Encode = objNode.text
    End Function
    
    Function Base64Decode (strData)
        Set objDocument = CreateObject("MSXML2.DOMDocument")
        Set objNode = objDocument.createElement("document")
        objNode.dataType = "bin.base64"
        objNode.text = strData
        Base64Decode = objNode.nodeTypedValue
    End Function



This is covered in more detail in Microsoft's Knowledge Base article named "How To Create XML Documents with Binary Data in Visual Basic", along with examples for hex encoding and date encoding (ISO-8601).
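For comparison, here is the same quick-and-dirty encode/decode round trip in Python's standard library - purely illustrative; nothing here touches MSXML:

```python
import base64

# Encode bytes to base64 text, then decode back
encoded = base64.b64encode(b"Hello, world!").decode("ascii")
print(encoded)   # SGVsbG8sIHdvcmxkIQ==
decoded = base64.b64decode(encoded)
print(decoded)   # b'Hello, world!'
```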

Saturday, May 05, 2012

Base64 Encoding in Platypus


Base64 encoding was originally designed for transmitting binary data over SMTP.  When email servers were first created, the SMTP protocol restricted the allowed characters to the lower 7 bits of ASCII's 8-bit character set. Among other things, this prevented binary files from being sent over email.

Then came MIME and with it, base64 encoding, which converted binary files into readable text suitable for sending over email. With base64, for every 3 bytes of binary data input there are 4 bytes of encoded text returned. Compared with hex (base16) encoding, the other dominant encoding mechanism at the time, base64 encoding is much more efficient in terms of storage, lowering the space needed for attachments from 2:1 (for hex) to 4:3 (for base64).  Since its creation, base64 encoding has been included in a variety of internet protocols - including SMTP authentication, HTTP authentication, and XML - and is used in various subsystems of the Platypus Billing System.

While Visual FoxPro - the base language for the Platypus Billing System - includes features for encoding and decoding using base64 through STRCONV(), that function does not follow the line length requirement in the RFC 2045 specification, which limits encoded lines to a maximum of 76 characters. For example, if data were encoded into 100 characters, two lines of encoded data would result: the first line of 76 characters, followed by CRLF, and then the remaining 24 characters.
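Python's standard library happens to expose both behaviors, which makes the difference easy to demonstrate. This is illustrative only - Platypus itself is FoxPro/C++ - and note that Python wraps with LF rather than the CRLF that RFC 2045 calls for:

```python
import base64

data = bytes(75)  # 75 bytes of input encode to exactly 100 base64 characters

raw = base64.b64encode(data).decode("ascii")      # one unbroken 100-char line
mime = base64.encodebytes(data).decode("ascii")   # wrapped at 76 chars per line

print(len(raw))                                   # 100
print([len(line) for line in mime.splitlines()])  # [76, 24]
```

A strict mail server rejecting the unwrapped form is exactly the kind of error described below.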

At first, this limitation was not a problem.  We used 3rd party ActiveX/COM libraries, named EncodeX and SmtpX from Mabry Software, for encoding email attachments and sending emails, respectively.  The other use of base64 within Platypus - our own attachment feature first included in Platypus v3 - did not have to interact with any external systems, so RFC 2045 compliance was not a requirement.

Eventually, we began work on creating a shared library that could wrap all our SMTP functionality for Platypus v5.  Up until this point, there were separate classes and libraries for the Platypus client and API.  For one, the Platypus client used ActiveX forms of the libraries, while the Platypus API used the COM forms.  Of course, this made maintaining that code doubly difficult and it was prone to inconsistent behavior between the Platypus client and API.  Because of that inconsistency, limitations in Visual FoxPro, and compatibility problems, we dropped the ActiveX form of SmtpX and completely dropped the EncodeX libraries.

Unfortunately, because FoxPro's STRCONV() function did not strictly follow MIME, sending attachments encoded using this function would often generate errors.  The strange thing was that these errors were not universal across all mail servers.  Some were more strict than others.  Anyway, to move forward, we decided to develop our own library for base64 encoding.

This was my first C++ project with Platypus.  Up until this point, my development experience - excluding school - was limited to Visual FoxPro, Visual Basic, and SQL.  After an excessive amount of research, we found a decent example in the public domain which performed encoding quickly enough and adapted it into a COM library using Visual C++ 6.0.  That library has been in use since that time for a majority of Platypus v4 and all of Platypus v5 and v6.  Not bad for a piece of code 10 years old.  With the release of Platypus v7, that COM library has been phased out in favor of the Mailbee SMTP COM library, which handles encoding internally; but the old base64 COM library is still included in our installation sets as part of a fallback feature.

The original C++ source for the encoder has been phased out of our other components, as well; being replaced with a much improved library.  This new library takes advantage of Microsoft's SecureCRT (secure C runtime) guidelines to prevent buffer overflows, now includes an efficient decoder, and has been fuzz tested to ensure stability.  It is currently in use within the updated Mailpopper for re-encoding email attachments that have been decoded by the Mailbee POP3 COM library, which pulls emails into the Helpdesk features in Platypus.

Monday, April 30, 2012

Configuring DEP on Windows Server 2008 R2 from a 32bit NSIS Installer

If the title is hard to understand, let me just shorten it to this.  The woes of compatibility testing!

Included with the Platypus Billing System are a number of 3rd party ActiveX libraries.  Most of the time, these libraries are wondrous things.  Unfortunately, one in particular has a problem with Windows DEP.

Now, upon installation of the Platypus client, we can get around this by configuring the Application Compatibility settings of our executable to bypass DEP.  So, whenever the exe is launched, the OS will not trap DEP problems for our process.  We do all of this by simply writing to the appropriate location in the registry during the installation process.

HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers
All the way from Windows XP up to Windows Server 2008, we simply had to write to the registry.  It didn't even matter whether the OS was x86 or x64.  It just worked.

Anyway, one of the more ingenious and infamous features from Windows Vista is redirection.  You see, whenever a 32bit application writes to HKEY_LOCAL_MACHINE\Software in the registry on 64bit Windows, it is actually writing to HKEY_LOCAL_MACHINE\Software\Wow6432Node.  So, because our installation set is a 32bit application itself, when it writes to the registry, it is actually writing to the 32bit subset of the registry.  This includes those pesky Application Compatibility settings I mentioned.  Up until the latest version of Windows Server, the OS didn't care whether you configured the compatibility settings in the 32bit or 64bit registry.  It checked both when the process started.  Here's an example of the code we used for configuring DEP*.
!include "LogicLib.nsh"
!include "WinVer.nsh"

Var /GLOBAL NSISRegPath
StrCpy $NSISRegPath "SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers"

${If} ${AtLeastWinXP}
    WriteRegStr HKLM "$NSISRegPath" "$INSTDIR\program.exe" "DisableNXShowUI"
${EndIf}

Starting with Windows Server 2008 R2, Application Compatibility settings in the 32bit registry now appear to be ignored.  Meaning the Platypus client will now crash whenever our troublesome ActiveX library rears its head, because our installation set can't get to the 64bit registry.  Not easily, anyway.

To get around this problem, the creators of NSIS - who provide us with the software for making installation sets - were kind enough to take advantage of some features in 64bit Windows that allow us to get around redirection.  Mostly.

Unfortunately, the NSIS feature only disables file system redirection - not registry redirection.  Since our installation set is writing directly to the registry, disabling file redirection doesn't help us on its own.  So, we have to find a way to reach the 64bit registry by way of the file system.  This leads us to reg.exe - a nifty little utility, included since Windows XP, that allows the registry to be accessed from the command line.  Since 64bit Windows has both a 32bit reg.exe and a 64bit reg.exe, disabling file redirection should allow us to call the 64bit copy directly, which doesn't have that pesky 32bit registry limitation.

All we need to do is check for 64bit Windows, disable file redirection, run reg.exe and then reenable file redirection.  That gives us code that looks something like this, which actually does work*.
!include "LogicLib.nsh"
!include "WinVer.nsh"
!include "x64.nsh"

Var /GLOBAL NSISRegPath
StrCpy $NSISRegPath "SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers"

Var /GLOBAL EXERegPath
StrCpy $EXERegPath "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers"

${If} ${AtLeastWinXP}
    WriteRegStr HKLM "$NSISRegPath" "$INSTDIR\program.exe" "DisableNXShowUI"
    ${If} ${RunningX64}
        ${DisableX64FSRedirection}
        ExecWait '"$SYSDIR\reg.exe" add "$EXERegPath" /v "$INSTDIR\program.exe" /d "DisableNXShowUI"'
        ${EnableX64FSRedirection}
    ${EndIf}
${EndIf}

Finally! It works!  Total time of this endeavor is a little over an hour.  Now I'm off for more compatibility testing.  Windows 8 and Windows 2012 up next!

Tuesday, December 20, 2011

What's your backup solution?

Over the past twelve years I can remember three separate hard drive failures; two on work computers.  Those are the ones I can remember, anyway.  After the second one, I started getting serious about backups.

I had written a VBScript back in 2000 that I used to archive my library every day using 7-zip - a poor man's version of source control.  At first, I burned those archives to cd-rom every month or so.  In fact, I still have a few of those cd-roms floating around my home office.  Now, this didn't just include my archive; it included all the software I had dealt with - specifically the software used for integrations.  It was mostly RADIUS servers, email servers and a couple ftp servers - along with license keys, notes, sql, help files, contact names and phone numbers (basically everything needed to start over in case of catastrophic loss).

Eventually, I moved from writing software integrations to being a proper software developer.  I was actually doing double duty then, writing both software integrations and adding new features for the Platypus client.  So, I got to enjoy the wondrous world of source control for Platypus, but integrations still stayed in zip's.

Of course, after my second hard drive crash, that backup solution just wasn't enough.  Cd-rom's take up physical space and require keeping track of where they all are.  They did contain sensitive information, after all.  So, I had to be careful with what I did with them.  Anyway, what I really wanted was a daily, reusable system that I could use to back up everything - including source code not ready for check-in - that preferably didn't involve cd-rw's.  So, I bought my first USB drive - the Soyo Cigar Pro 128MB USB Flash Memory Drive for $72.94 on Feb 7, 2003.  I know it isn't much now, but back then it was an amazing thing - solid state engineering at its best.

At that point, I reconfigured my VBScript to zip and copy everything over to the flash drive, which I dutifully ran every day before going home.  I even used a combination where I would fill up the flash drive, and then copy everything to a cd-rom.  Actually, I only kept Friday backups in an effort to cut back on space, which meant I only had to burn a cd-rom every three to six months.  That was much more acceptable than a new cd-rom every month, and it made sure I had valid daily backups with a decent historical archive.

From there, I moved from using a desktop to a laptop for development, and integrations fell by the wayside; but I still performed a daily backup of all the source code I was writing.  I moved to a 256MB flash drive, then to a 512MB and finally to a 2GB.  Since I didn't need integrations backed up, I dropped cd-rom's altogether and kept only source code backups for a month or two.  If I needed something older, it belonged in source control.

Today, all of that has changed.  I'm back to using a desktop and I don't use either flash drives or cd-roms for backups.  I now use a combination of RAID 1, IDrive, and VMware for backups.  Sure, it's only RAID 1 - and Matrix RAID, at that, instead of hardware-based RAID - but that was enough when one of the drives died suddenly.  Probably the best decision I made when getting my desktop from Dell was to get RAID pre-installed.  Since then, there haven't been any problems, but it is nice to know I am covered in case of catastrophe.  Even better is the fact that I no longer have to put forth any effort to ensure my data is backed up.

Wednesday, August 03, 2011

Goodbye Firefox. It was nice knowing you.

I've finally given up on Firefox. It was a great browser for its time, but it has become unusable for me. Don't get me wrong, I would still love to use it, but it has become too cumbersome.

First, a little background. This all takes place on my laptop, which has a mobile version of the i7 processor and 4GB of RAM; neither of which should be scoffed at. Plus, I am running the latest and greatest version of Firefox. Even with all that horsepower, Firefox has been *pull my hair out* frustrating lately.

This problem has been going on for weeks now, but today was my final breaking point. Firefox has been running for a few/many days. I'm not exactly sure how long - probably over a week. Regardless, I had over 30 tabs open. Every now and then - especially when closing or switching tabs - the browser would hang for around a minute. Really!? Closing a tab takes a minute? That can't be right. I have a freakin' i7 processor and oodles of RAM.

I checked Task Manager and Process Explorer. Nothing was taking up any significant amount of processor. Even Firefox was in single digits - as it should be with an i7 processor. Ok. Well, first thing, plugins. I disabled all but what I would consider essential plugins (I actually did this last week, but I wanted to see how it went before I made a rash judgement). The ones I kept are Adobe Acrobat, DivX, Quicktime, Flash, Silverlight and Adblock Plus. All well known and fairly stable plugins. Nothing wacky.

Disabling plugins had no effect. At least nothing I could notice. Ok. Maybe it's still one of the plugins. Checked Task Manager and Process Explorer and killed all the plugin-container.exe processes I could find. Still no effect. In some of these cases, there weren't even any plugin-container.exe processes running. This leaves one major thing that I can think of. Mozilla needs to learn an age old lesson for large applications. Shared memory is bad - meaning you absolutely have to have some sort of separation or isolation between components (Now this doesn't go for every application, but browsers definitely fit this bill). Linux learned the lesson ages and ages ago by saying "don't create monolithic applications". Intel, Microsoft, and Google learned this lesson. When will Mozilla?  Well, as I just found out, they are; but it's a long way from being done. (Plus, I'm already half way through with my rant. Why stop now?)

Browsers really are becoming the end-all-be-all of applications. While browsers don't actually do everything, they do provide a gateway for anything to be done. Kind of like having a modem back in the 80's. If you had one, you had access to the amazing world of wasting time. Even if it was just AOL or Compuserve, you were "connected". The same goes for browsers. If you have one - a relatively modern one - you can do your banking, your shopping, talk face to face; you can even watch freakin' movies. You can do all that and more - even at the same time.  There's even a "programming" language built in.

Because they can dynamically do so much all at once, there is so much less room for error. If something goes wrong in one place, it shouldn't drag the system down with it. There's no reason for that. Firefox has released its first set of features - OOPP (Out of Process Plugins) - that begins to deal with the problem.  This prevents 3rd party code from causing Firefox to crash, but that isn't far enough. Each tab should be its own process. This is my number one favorite feature of Chrome. Opening and closing tabs is nigh instantaneous. (Yes, I realize Chrome hides the window/tab and does the real shutdown behind the scenes, but it's a separate process and doesn't slow down the rest of the "application".)

Now, there is a downside to this.  A separate process per tab potentially means longer start-up times, more processor time, more RAM; and sharing data across tabs/processes has got to be a nightmare.  As a user, I don't really care about any of that. I just want the application to respond reasonably well, and Chrome's GUI does this better than any other browser out there. There's even a Firebug plugin for Chrome.  So, I may even give up on Firefox for dev purposes, excepting some QA test cases.

Now, as I found out about three-quarters of the way through my rant, Mozilla has the Electrolysis (or e10s) project under way; but it's a long way from being done.  When they finish, I'll reconsider switching back to Firefox; but until then, it's Chrome all the way.

Monday, July 18, 2011

Dynamic Linked Libraries (DLL) vs Static Libraries

We no longer use DLL's with the Platypus Billing System, except where absolutely necessary.  In some cases, with high level languages (such as Visual Basic 6 and Visual FoxPro) and 3rd party libraries (such as OpenSSL) written in C/C++, we have no other choice.  Plus there are ActiveX/COM libraries (such as MSXML, Mailbee, DBI, and Crystal Reports), which cannot be linked to statically.  But, in many cases, it can be avoided.

Without getting into an argument over which is better or worse, when the stability of a product is on the line, having DLL's creates another point of failure.  For that reason alone, it was more important for us to statically link our C/C++ code where possible.  Sure, the binaries may be larger and updates basically meant a re-install; but it has been well worth these minor difficulties. 

Since the switch to Visual C++ 2005 and static linking back in 2009, the number of C++ dependency issues we have encountered is still in the single digits - and that is only because of ActiveX/COM.  Just to relay the point, here are a few of the specific cases I have encountered over the past few years.

Case #1: PHP vs Pidgin

Both PHP and Pidgin include a spell check library - Aspell - in the form of aspell-15.dll.  Since the web pages for our product are written in PHP, I - of course - need PHP installed on my dev machine.  Also, I have Pidgin installed for chatting with technical support - or anyone else at work when a face-to-face confab is not required.

Now, normally these two products are not in conflict and everything works swimmingly.  But, one day, I decided to grab one of the newer - more stable, secure, and compiled in VC 2008 - PHP editions from the PHP for Windows site.  Everything worked fine at first.  Then, as happens, I needed to reboot.  Afterwards, Pidgin crashed every time I tried to start it up.

After yanking my hair out using Dependency Walker and Process Monitor, I finally figured out that it was because of Aspell.  I renamed aspell-15.dll in the PHP folder and everything started working again.  Because PHP was in the system path, Pidgin was loading the PHP version of the dll instead of the one in the Pidgin folder.  It shouldn't have done this, and I could find no logical reason for it, but that is what was happening.

Regardless, I didn't have the time to look into it further.  I knew the cause and could bypass it.  Spell check is nice, but completely unnecessary for IM.  So, I uninstalled Pidgin, and reinstalled it without the spell check feature.  Problem solved - or, at least, dealt with.

Case #2: PHP vs OpenSSL

With our product, we include a COM DLL (tu_app.dll) for interacting with the Tucows Email Service.  This COM library was written in Visual C++ 6.0 and was linked to some severely old versions of the OpenSSL libraries.  Again, because I decided to go mucking about with my installation of PHP, I broke yet another thing on my dev machine.

I was performing some fixes for our integration with Tucows Email and had to do some unit tests.  Every time I tried to load the COM object, the program would crash spectacularly.  After some more hair pulling, I traced it down to the OpenSSL libraries.  I replaced the DLL's installed by PHP with the one included in our installation set and it started working again.

Problem solved?  No, definitely not.  While crashing my IM client is one thing, the possibility that someone could install a special version of PHP on the same machine as our product - which is normally the case - is another.  Only the older versions of the OpenSSL libraries would work with our COM library.

Those OpenSSL libraries were ancient and would not pass any scrutiny when it came to PA-DSS.  Plus, having our product crash because we required using outdated and insecure versions of the OpenSSL libraries was completely unacceptable.  So, we ported the code from Visual C++ 6.0 to Visual C++ 2005 and statically linked to OpenSSL.  Now, the problem was solved.

Case #3: ATL Vulnerability

When this problem first came out, I was working on a separate major rewrite/port of our C++ code - specifically a Windows service for hosting our API - from Visual C++ 6.0 to Visual C++ 2005.  I had everything working.  It was beautiful and simple code, it compiled without warnings, it had no memory leaks, and it passed every test I threw at it.

Next, came compatibility testing.  After making an installation set for our product, I started testing on all the operating systems we supported - Windows 2000 up to Windows Vista/2008.  Upon start up on Windows Vista and 2008, the service immediately crashed.  It worked fine on Windows 2000 and XP.

I checked the Eventlog and found a side-by-side dependency error.  Considering this was my first venture into something newer than VC6, I wasn't fully competent with Application Manifests at the time.  So, I had no idea what this error really meant.

I checked the installation set to make sure it included the Visual C++ runtime - and it did.  I checked the installation log (and Add/Remove Programs) to make sure it installed - and it did.  After even more hair pulling, I found out about the ATL update.

The worst part was, no installation set for the Visual C++ runtime that included the ATL fix existed.  There is now, but there wasn't at the time (or I just suck at using a search engine).  So, I could try to install the runtime files manually, but that involved a huge amount of effort and testing on all those OS's - especially for something that had to be done that night.  I needed to finish my testing so we could release the next day (and possibly grab some sleep that night).  Plus, I had no idea what DLL's to install or where to install them or how to deal with WinSxS from a NSIS installation set.

So, my only option was to switch to static linking.  No more dependencies.  No unnecessary points of failure.  Or more simply, no more DLL Hell.  Finally, problem solved and a few hours sleep before the release.

Case #4: DLL Preloading Vulnerability

This is a generic definition of case #1.  A DLL from an unexpected location is loaded instead of the intended one.  While case #1 wasn't officially an attack, it did crash a program and caused me a couple hours of unneeded stress.

Now, in cases like this, there are officially two ways to deal with it.  First, you can reduce the attack surface by using SetDllDirectory.  This limits the possibility of an attack, but doesn't eliminate it, as I found out.  The second way is to do away with the problem altogether by static linking.  I am a firm believer that elimination is far better than mitigation - especially considering it requires no actual code change and reduces the amount of installation set testing required.
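For illustration only - this is not code from our product - the mitigation boils down to a single API call at process startup.  A Python sketch via ctypes (the function name harden_dll_search is my own; the underlying call is the Win32 SetDllDirectoryW):

```python
import ctypes
import sys

def harden_dll_search() -> bool:
    """Remove the current directory from the DLL search path (Windows only).

    Passing an empty string to SetDllDirectoryW tells the loader to skip
    the current working directory when resolving implicitly loaded DLLs,
    which narrows the window for DLL preloading attacks.
    """
    if sys.platform != "win32":
        return False  # no-op on other platforms
    return bool(ctypes.windll.kernel32.SetDllDirectoryW(""))
```

As noted above, this narrows the search path but doesn't touch directories like the system path - which is exactly where my Aspell conflict came from - so it mitigates rather than eliminates.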

Sunday, July 17, 2011

Application Manifests & Visual Basic 6

Embedding Application Manifests in Visual Basic 6 binaries is really easy.  Microsoft even wrote a command line utility just for this purpose.  Well, not for VB6 but for binaries in general.  The Manifest Tool (mt.exe), which is included in both Visual Studio and the Windows SDK is extremely simple to use.  The best part is that it handles any necessary padding and can update just about any binary with no fuss.

Here's an example command line using the naming convention of Visual Studio 2005, where the manifest filename contains the program name with ".intermediate.manifest" appended. 

mt.exe -nologo -manifest "program.exe.intermediate.manifest" -outputresource:"program.exe;#1"

And now for a story...

When we were first confronted with the need for manifests - specifically for triggering UAC prompts in our configuration tools written in VB6 - I performed my due diligence.  I researched the topic thoroughly, I took examples of the manifests provided by Microsoft, and I tested on each and every Windows OS we supported - Windows 2000 all the way up to Windows Vista/2008.

The one thing I couldn't find was a simple way to embed the manifest in those executables that could be easily automated.  The articles I read covered GUI tools like XN Resource Editor and Resource Hacker, writing my own C/C++ program using UpdateResource, a long winding route using the Resource Compiler (rc.exe) or, finally, just leaving the manifest as a separate file.

Even though manifests had been around since Windows XP, there wasn't a single article I could find that even mentioned the Manifest Tool. Even in the Microsoft articles I have found, there is never any mention of VB6 and the Manifest Tool together.  Of course, VB6 was considered legacy by the time Application Manifests came out; so, while frustrating, I can't really blame them.  I can blame my search engine skills, but that's no fun.

Anyway, all but the last option were complicated, convoluted, or required too much effort.  We, of course, finally settled on that last option - using external manifests - out of necessity to get something out the door.  It wasn't until we started migrating code from Visual C++ 6.0 over to Visual Studio 2005 - over a year and a half later - that I noticed the mt.exe command line in the build log.  Now, along with code signing through signtool.exe, the Manifest Tool is part of much of our automated build process, and I am much happier for discovering it.

Saturday, July 09, 2011

Taxes Are Hard (Texas Tax Edition) - Part 2

Taxes are hard; and as Texas has proved so far, Texas taxes are extraordinarily difficult.  Even after all the specifics laid out in part 1 of this post, a great deal of information is left before I can even begin talking about what it actually means.

State Tax
Regardless of whether something is sold by a company in Texas or is sold to a customer in Texas, the state tax of 6.25% always applies.  This is perhaps the simplest feature of Texas taxes.  If it weren't for the $25 internet access exemption or the 20% web development/information service exemption, Texas state taxes would be easy. 

Local Sales Tax
Beyond the state tax, the next type of tax that must be calculated is the local sales tax.  This tax is based on the location of the seller's place of business.

Local Use Tax
Next, after both the state tax and the local sales tax are calculated comes the local use tax.  This tax is based on the location of the customer or where the customer receives the goods and services.

Further Complications of Local Taxes
Both the local sales and local use taxes are further broken down into four different locale types: city, county, special purpose districts and transit.  So, combined, this creates nine - count them nine - different types of taxes that go into the calculation.

Next, after all that breakdown, city tax rates are different for each city, county tax rates are different for each county and so on.  All combined, the local tax rate - for both local sales and local use taxes - cannot exceed 2%.  This 2% limit works on a priority basis, adding each subsequent tax type to the total until 2% is reached.  If adding one of the local tax rates would exceed the 2% limit, only the amount necessary to reach the limit is used.  The order of the local tax rates is as follows.
  1. local city sales tax
  2. local county sales tax
  3. local special purpose district sales tax
  4. local transit sales tax
  5. local city use tax
  6. local county use tax
  7. local special purpose district use tax
  8. local transit use tax
In addition, city taxes do not apply outside of the city limits.  So, if a company is located outside of the city limits, the local city sales tax will not apply; and if a customer is located outside of the city limits, the local city use tax will not apply.

Also, while the terms "sales" and "use" imply a different set of rules or percentages, they actually don't. Local tax rates are the same for both sales and use. Plus, local taxes are reported by locale type: city, county, special purpose district, and transit.  Beyond the initial calculation, the terms sales and use are not applied (at least, to my knowledge).

Finally, along those same lines, duplicates are ignored.  Local sales taxes for the seller are calculated first; then, local use taxes for the customer are calculated, ignoring any local use tax that duplicates a local sales tax.  For example, if both the company and the customer are located within the same county, the local county sales tax will apply but the local county use tax will not; or more aptly put, each local tax is only calculated once.
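To make the priority and cap rules concrete, here's a sketch in Python (nothing from our billing code - the function and the example rates are made up for illustration).  Rates are in basis points so the arithmetic stays exact:

```python
CAP_BP = 200  # the 2% local tax cap, in basis points (1 bp = 0.01%)

def local_tax_bp(rates_bp):
    """Apply local rates in priority order until the 2% cap is reached.

    rates_bp is an ordered list of (name, rate) pairs - city sales first,
    transit use last - with any sales/use duplicates already removed.
    Returns the total applied rate and the list of clipped rates.
    """
    total = 0
    applied = []
    for name, bp in rates_bp:
        take = min(bp, CAP_BP - total)  # only take what fits under the cap
        if take <= 0:
            break  # cap reached; remaining lower-priority taxes never apply
        applied.append((name, take))
        total += take
    return total, applied
```

For example, with a hypothetical 1% city, 0.5% county, and 0.75% special purpose district sales tax, the district rate gets clipped to 0.5% and any transit tax never applies at all.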

The information provided in this article is just a summary of the Texas local tax calculations.  The Window on State Government web site provides an article - and is the basis for this post - which covers many different scenarios with specific examples for each in the February 2009 Local Sales and Use Tax Bulletin - Guidelines for Collecting Local Sales and Use Tax.

Taxes Are Hard (Texas Tax Edition) - Part 1

Taxes are hard, and along the lines of "don't mess with Texas", internet taxes in Texas go above and beyond the norm.  The basics of Texas internet taxes are as follows.

Note: Because of the complications of Texas taxes, this article is broken down into several manageable posts. The first two of these specifically cover a summary of the rules for Texas taxes.

Internet Access Service
Web Development and Information Services
Seems simple, doesn't it?  Now for the semantics.
Continued in Part 2.