World of Whatever

A blog about SQL Server, SSIS, C# and whatever else I happen to be dealing with in my professional life.

Find ramblings

Tuesday, August 15, 2017

Biml build Database collection nodes

Biml build Database collection nodes aka what good are Annotations

In many of the walkthroughs on creating relational objects via Biml, it seems like people skim over the Databases collection. There's nothing built into the language to really support the creation of database nodes. The import database operations are focused on tables and schemas and assume the database node(s) have been created. I hate assumptions.

Connections.biml

Add a biml file to your project to house your connection information. I'm going to create two connections to the same Adventureworks 2014 database.

<Biml xmlns="http://schemas.varigence.com/biml.xsd">
    <Connections>
        <OleDbConnection Name="Aw2014" ConnectionString="Provider=SQLNCLI11;Server=localhost\dev2016;Initial Catalog=AdventureWorks2014;Integrated Security=SSPI;" />
        <OleDbConnection Name="Aw2014Annotated" ConnectionString="Provider=SQLNCLI11;Server=localhost\dev2016;Initial Catalog=AdventureWorks2014;Integrated Security=SSPI;">
            <Annotations>
                <Annotation Tag="DatabaseName">AdventureWorks2014</Annotation>
                <Annotation Tag="ServerName">localhost\dev2016</Annotation>
                <Annotation Tag="Provider">SQLNCLI11</Annotation>
            </Annotations>
        </OleDbConnection>
    </Connections>
</Biml>

The only material difference between the first and second OleDbConnection node is the declaration of Annotations on the second instance. Annotations are free form entities that allow you to enrich your Biml with more metadata. Here I've duplicated the DatabaseName, ServerName and Provider properties from my connection string. As you'll see in the upcoming BimlScript, prior proper planning prevents poor performance.

Databases.biml

The Databases node is a collection of AstDatabaseNode. They require a Name and a ConnectionName to be valid. Let's look at how we can construct these nodes based on information in our project. The first question I'd ask is: what metadata do I readily have available? My Connections collection is already built out, so I can enumerate the items in there to populate the value of ConnectionName. The only remaining item, then, is the Name for our database. I can see three ways of populating it: parsing the connection string for the database name, instantiating the connection manager and querying the database name from the RDBMS, or pulling the database name from our Annotations collection.

The basic approach would take the form

<Databases>
<#
    string databaseName = string.Empty;

    foreach(AstOleDbConnectionNode conn in this.RootNode.OleDbConnections)
    {
        databaseName = "unknown";
        // Logic goes here!
#>
        <Database Name="<#= conn.Name #>.<#= databaseName#>" ConnectionName="<#= conn.Name #>" />
<#
    }
#>
</Databases>

The logic we stuff in there can be as simple or complex as needed, but the end result would be a well-formed Database node.

Parsing

Connection strings are delimited (semicolon) key value pairs unique to the connection type. In this approach we'll split the connection string by semicolon and then split each resulting entity by the equals sign.

         // use string parsing and prayer
         try
         {
            string [] kVpish;
            KeyValuePair<string, string> kvp ;
            // This is probably the most fragile as it needs to take into consideration all the
            // possible connection styles. Get well acquainted with ConnectionStrings.com
            // Split our connection string based on the delimiter of a semicolon
            foreach (var element in conn.ConnectionString.Split(new char [] {';'}))
            {
                kVpish = element.Split(new char[]{'='});
                if(kVpish.Count() > 1)
                {
                    kvp = new KeyValuePair<string, string>(kVpish[0], kVpish[1]);
                    if (String.Compare("Initial Catalog", kvp.Key, StringComparison.InvariantCultureIgnoreCase) == 0)
                    {
                        databaseName = kvp.Value;
                    }
                }
            }
         }
         catch (Exception ex)
         {
             databaseName = string.Format("{0}_{1}", "Error", ex.ToString());
         }

The challenge is that this approach is fragile. As ConnectionStrings.com can attest, there are a lot of ways of constructing a connection string, and the key that denotes the database name varies by provider.
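To make that fragility concrete, here's a hypothetical sketch in Python (rather than the BimlScript C# above) showing that the same database can hide behind different keys depending on the provider's connection-string dialect. The candidate key list is an assumption and is surely incomplete.

```python
def get_database_name(connection_string):
    """Return the database name from a connection string, or None.

    Different providers use different keys for the same concept,
    so we check several candidates. This list is not exhaustive;
    get well acquainted with ConnectionStrings.com.
    """
    candidates = ("initial catalog", "database")
    pairs = {}
    for element in connection_string.split(";"):
        key, _, value = element.partition("=")
        if value:
            pairs[key.strip().lower()] = value.strip()
    for candidate in candidates:
        if candidate in pairs:
            return pairs[candidate]
    return None

# OLE DB style vs. a typical ADO.NET style: same database, different keys
oledb = "Provider=SQLNCLI11;Server=localhost;Initial Catalog=AdventureWorks2014;Integrated Security=SSPI;"
adonet = "Server=localhost;Database=AdventureWorks2014;Trusted_Connection=True;"
```

Any connection string whose dialect isn't in the candidate list silently yields no name at all, which is exactly the failure mode the text warns about.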

Query

Another approach would be to instantiate the connection manager and ask the database itself what its name is. SQL Server makes this easy:

            // Or we can use database connectivity
            // This query would need to have intelligence built into it to issue correct statement
            // per database. This works for SQL Server only
            string queryDatabaseName = @"SELECT db_name(db_id()) AS CurrentDB;";
            System.Data.DataTable dt = null;
            dt = ExternalDataAccess.GetDataTable(conn, queryDatabaseName);
            foreach (System.Data.DataRow row in dt.Rows)
            {
                databaseName = row[0].ToString();
            }

The downside is that, much like parsing the connection string, this is provider specific. Plus, it will be the slowest approach due to the cost of instantiating connections, not to mention ensuring the build server has all the correct drivers installed.

Annotations

"Prior proper planning prevents poor performance" so let's spend a moment up front to define our metadata and then use Linq to extract that information. Since we can't guarantee that a connection node has an Annotation tag named DatabaseName, we need to test for the existence (Any) and if we find one, we'll extract the value.

            if (conn.Annotations.Any(an => an.Tag=="DatabaseName"))
            {
                // We use the Select method to pull out only the thing, Text, that we are interested in
                databaseName = conn.Annotations.Where(an => an.Tag=="DatabaseName").Select(t => t.Text).First().ToString();
            }

The downside to this approach is that it requires planning and a bit of double entry, as you need to keep the metadata (annotations) synchronized with the actual connection string. But since we're the automating kind of people, that shouldn't be a problem...
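Since the annotation duplicates what the connection string already says, the double entry can be kept honest with a build-time consistency check. A hypothetical sketch in Python (the same idea is a few lines of C# inside BimlScript; the function name and dictionary shape are illustrative, not a Biml API):

```python
def annotations_match(connection_string, annotations):
    """Return True when the DatabaseName annotation agrees with the
    connection string's Initial Catalog (case-insensitive compare,
    mirroring SQL Server's default collation behavior)."""
    catalog = None
    for element in connection_string.split(";"):
        key, _, value = element.partition("=")
        if key.strip().lower() == "initial catalog":
            catalog = value.strip()
    annotated = annotations.get("DatabaseName")
    if catalog is None or annotated is None:
        return False
    return catalog.lower() == annotated.lower()

conn_string = "Provider=SQLNCLI11;Server=localhost\\dev2016;Initial Catalog=AdventureWorks2014;Integrated Security=SSPI;"
good = {"DatabaseName": "AdventureWorks2014", "ServerName": "localhost\\dev2016"}
stale = {"DatabaseName": "AdventureWorks2012"}
```

Run a check like this at the top of your tiered build and a stale annotation fails fast instead of generating packages pointed at the wrong database.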

Databases.biml

Putting it all together, our Databases.biml file becomes

<Biml xmlns="http://schemas.varigence.com/biml.xsd">
<#@ template tier="10" #>
    <Databases>
    <#
    foreach(AstOleDbConnectionNode conn in this.RootNode.OleDbConnections)
    {
        string databaseName = "unknown";
        
        // Test whether the annotation collection contains a tag named DatabaseName
        if (conn.Annotations.Any(an => an.Tag=="DatabaseName"))
        {
            // We use the Select method to pull out only the thing, Text, that we are interested in
            databaseName = conn.Annotations.Where(an => an.Tag=="DatabaseName").Select(t => t.Text).First().ToString();
        }
        else
        {
            // No annotations found
            bool useStringParsing = true;
            if (useStringParsing)
            {
                // use string parsing and prayer
                try
                {
                    string [] kVpish;
                    KeyValuePair<string, string> kvp;
                    // This is probably the most fragile as it needs to take into consideration all the
                    // possible connection styles. Get well acquainted with ConnectionStrings.com
                    // Split our connection string based on the delimiter of a semicolon
                    foreach (var element in conn.ConnectionString.Split(new char [] {';'}))
                    {
                        kVpish = element.Split(new char[]{'='});
                        if(kVpish.Count() > 1)
                        {
                            kvp = new KeyValuePair<string, string>(kVpish[0], kVpish[1]);
                            if (String.Compare("Initial Catalog", kvp.Key, StringComparison.InvariantCultureIgnoreCase) == 0)
                            {
                                databaseName = kvp.Value;
                            }
                        }
                    }
                }
                catch (Exception ex)
                {
                    databaseName = string.Format("{0}_{1}", "Error", ex.ToString());
                }
            }
            else
            {
                // Or we can use database connectivity
                // This query would need to have intelligence built into it to issue the correct statement
                // per database. This works for SQL Server only
                string queryDatabaseName = @"SELECT db_name(db_id()) AS CurrentDB;";
                System.Data.DataTable dt = null;
                dt = ExternalDataAccess.GetDataTable(conn, queryDatabaseName);
                foreach (System.Data.DataRow row in dt.Rows)
                {
                    databaseName = row[0].ToString();
                }
            }
        }
        
    #>
        <Database Name="<#= conn.Name #>.<#= databaseName#>" ConnectionName="<#= conn.Name #>" />
        <#
        }
        #>
    </Databases>
</Biml>

And that's what it takes to use BimlScript to build out the Database collection nodes based on Annotations, string parsing, or database querying. Use Annotations to enrich your Biml objects with good metadata and then you can use it to simplify future operations.

Friday, July 28, 2017

Generate TSQL time slices

I had some log data I wanted to bucket into 15 second time slices and I figured if I've solved this once, I'll need to solve it again, so to the blog machine! This uses LEAD, TIMEFROMPARTS, and ROW_NUMBER to accomplish the slicing.
SELECT
    D.Slice AS SliceStart
,   LEAD
    (
        D.Slice
    ,   1
        -- Default to midnight
    ,   TIMEFROMPARTS(0,0,0,0,0)
    )
    OVER (ORDER BY D.Slice) AS SliceStop
,   ROW_NUMBER() OVER (ORDER BY D.Slice) AS SliceLabel
FROM
(
    -- Generate 15 second time slices
    SELECT 
        TIMEFROMPARTS(A.rn, B.rn, C.rn, 0, 0) AS Slice
    FROM
        (SELECT TOP (24) -1 + ROW_NUMBER() OVER (ORDER BY(SELECT NULL)) FROM sys.all_objects AS AO) AS A(rn)
        CROSS APPLY (SELECT TOP (60) (-1 + ROW_NUMBER() OVER (ORDER BY(SELECT NULL))) FROM sys.all_objects AS AO) AS B(rn)
        -- 4 values since we'll aggregate to 15 seconds
        CROSS APPLY (SELECT TOP (4) (-1 + ROW_NUMBER() OVER (ORDER BY(SELECT NULL))) * 15  FROM sys.all_objects AS AO) AS C(rn)
) D

That looks like a lot, but it really isn't. Starting with the innermost query, we select the top 24 rows from sys.all_objects and use the ROW_NUMBER function to generate a monotonically increasing set of values, 1...24. However, since the allowable range of hours is 0 to 23, I deduct one from this value (A). I repeat this pattern to generate minutes (B), except we take the top 60. Since I want 15 second intervals, for the seconds query I only take the top 4 values. I deduct one so we have {0,1,2,3} and then multiply by 15 to get my increments (C). If you want different time slices, that's where I would modify this pattern.

Finally, having 3 columns of numbers, I use TIMEFROMPARTS to build a time data type with the least amount of precision, present that as "Slice", and encapsulate it in a derived table (D). Running that query gets me a list of periods, but I don't know what the end period is.

We can calculate the end period by using the LEAD function. I present my original Slice as SliceStart. I then use the LEAD function to calculate the next (1) value based on the Slice column. In the case of 23:59:45, the "next" value in our data set is NULL. To address that scenario, we pass in the default value for the LEAD function.
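The same bucketing logic is easy to sanity-check outside the database. A rough Python equivalent of the derived table plus LEAD (purely illustrative, not part of the T-SQL solution):

```python
from datetime import time

# Build every 15-second slice in a day: 24 hours x 60 minutes x 4 quarter-minutes,
# mirroring the A CROSS APPLY B CROSS APPLY C derived table
slices = [time(h, m, s * 15)
          for h in range(24)
          for m in range(60)
          for s in range(4)]

# LEAD(Slice, 1, midnight): each slice's stop is the next slice's start,
# with the final slice (23:59:45) wrapping to the default of 00:00:00
labeled = [(start,
            slices[i + 1] if i + 1 < len(slices) else time(0, 0, 0),
            i + 1)  # ROW_NUMBER as the slice label
           for i, start in enumerate(slices)]
```

24 * 60 * 4 gives the expected 5,760 slices, and the last tuple shows why the LEAD default matters: without it, the final stop would be NULL.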

Friday, June 30, 2017

Rename default constraints

This week I'm dealing with synchronizing tables between environments and it seems that regardless of what tool I use for schema compare, it still gets hung up on the differences in default names for constraints. Rather than fight that battle, I figured it'd greatly simplify my life to systematically rename all my constraints to non-default names. The naming convention I went with is DF__schema name_table name_column name. I know that my schemas/tables/columns don't have spaces or "weird" characters in them so this works for me. Use this at your own risk, and if you are on a pre-2012 version, the CONCAT call will need to be adjusted to classic string concatenation, a.k.a. +
DECLARE @query nvarchar(4000);
DECLARE
    CSR CURSOR
FAST_FORWARD
FOR
SELECT
    CONCAT('ALTER TABLE ', QUOTENAME(S.name), '.', QUOTENAME(T.name), ' DROP CONSTRAINT [', DC.name, '];', CHAR(10)
    , 'ALTER TABLE ', QUOTENAME(S.name), '.', QUOTENAME(T.name)
    , ' ADD CONSTRAINT [', 'DF__', (S.name), '_', (T.name), '_', C.name, ']'
    , ' DEFAULT ', DC.definition, ' FOR ', QUOTENAME(C.name)) AS Query
FROM
    sys.schemas AS S
    INNER JOIN
        sys.tables AS T
        ON T.schema_id = S.schema_id
    INNER JOIN
        sys.columns AS C
        ON C.object_id = T.object_id
    INNER JOIN
        sys.default_constraints AS DC
        ON DC.parent_object_id = T.object_id
        AND DC.object_id = C.default_object_id
WHERE
    DC.name LIKE 'DF__%'
    AND DC.name <> CONCAT('DF__', (S.name), '_', (T.name), '_', C.name);

OPEN CSR
FETCH NEXT FROM CSR INTO @query;
WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        EXECUTE sys.sp_executesql @query, N'';
    END TRY
    BEGIN CATCH
        PRINT ERROR_MESSAGE()
        PRINT @query;
    END CATCH
    FETCH NEXT FROM CSR INTO @query;
END
CLOSE CSR;
DEALLOCATE CSR;
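The generated name itself follows a simple template. A tiny Python sketch of the convention (hypothetical helper; it just mirrors the CONCAT expression in the cursor query and shares its no-weird-characters assumption):

```python
def default_constraint_name(schema, table, column):
    """Deterministic default-constraint name: DF__schema_table_column.

    Assumes identifiers contain no spaces or "weird" characters,
    exactly as the post warns.
    """
    return "DF__{0}_{1}_{2}".format(schema, table, column)
```

Because the name is a pure function of schema, table, and column, the same constraint gets the same name in every environment, which is what lets the schema-compare tools stop flagging them.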

Wednesday, March 22, 2017

Variable scoping in TSQL isn't a thing

It's a pop quiz kind of day: run the code through your mental parser.

BEGIN TRY
    DECLARE @foo varchar(30) = 'Created in try block';
    DECLARE @i int = 1 / 0;
END TRY
BEGIN CATCH
    PRINT @foo;
    SET @foo = 'Catch found';
END CATCH;

PRINT @foo;
It won't compile since @foo goes out of scope for both the catch and the final line
It won't compile since @foo goes out of scope for the final line
It prints "Created in try block" and then "Catch found"
I am too fixated on your form not having a submit button

Crazily enough, the last two are correct. Unlike every other language I've worked with, all variables share the same local scope regardless of where in the script they are defined. Demo the first

Wanna see something even more crazy? Check this version out

BEGIN TRY
    DECLARE @i int = 1 / 0;
    DECLARE @foo varchar(30) = 'Created in try block';
END TRY
BEGIN CATCH
    PRINT @foo;
    SET @foo = 'Catch found';
END CATCH;

PRINT @foo;

As above, the scoping of variables remains the same, but the forced divide-by-zero error occurs before the declaration and initialization of @foo. The result? @foo remains uninitialized, as evidenced by the first PRINT in the CATCH block, but it still exists: the declaration was parsed and the variable instantiated, while the value assignment never ran. Second demo
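For contrast, Python behaves similarly in one respect and differently in another: names are function-scoped rather than block-scoped, so an assignment inside try is visible in except; but a name whose assignment never executed simply doesn't exist, instead of existing with a NULL value as in T-SQL. A quick sketch:

```python
def scoping_demo():
    """Mirror of the first T-SQL demo: assignment runs before the error."""
    results = []
    try:
        foo = "Created in try block"   # runs before the error
        _ = 1 / 0                      # force an error, like 1/0 in T-SQL
    except ZeroDivisionError:
        results.append(foo)            # visible: no block scoping here either
        foo = "Catch found"
    results.append(foo)
    return results

def scoping_demo_reversed():
    """Mirror of the second demo: the error fires BEFORE foo is assigned."""
    try:
        _ = 1 / 0
        foo = "Created in try block"   # never reached
    except ZeroDivisionError:
        try:
            return foo                 # unlike T-SQL's NULL, this raises
        except NameError:              # UnboundLocalError is a NameError
            return "NameError: foo was never assigned"
```

So T-SQL's oddity isn't the shared scope, which other languages have too; it's that a declaration whose execution path was never reached still produces a usable, NULL-valued variable.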

What's all this mean? SQL's weird.

Wednesday, March 8, 2017

Getting Windows share via python

Windows network shares with python

Backstory

On a daily basis, we receive data extracts from a mainframe. They provide a header and data file for whatever the business users want to explore. This client has lots of old data ferreted away and they need to figure out if there's value in it. Our job is to consume the header files to drop and create tables in SQL Server and then populate with actual data. The SQL is trivial -

CREATE TABLE Foo (Col1 varchar(255), ColN varchar(255)); 
BULK INSERT Foo FROM 'C:\sourceFile.csv' WITH (FIRSTROW=1,ROWTERMINATOR='\n',FIELDTERMINATOR='|');

Let's make this harder than it should be

Due to ... curious permissions and corporate politics, the SQL Server service account could only read files via a network share (\\Server\Share\Input\File.csv), never you mind the fact that the path was really just D:\Share\Input. A local drive, but permissions were such that we couldn't allow the service account to read from the drive directly. Opening up a network share and letting the account read from that - no problem.

What are the shares?

That's an easy question to answer, because I knew the answer: net share. I coded up a simple parser and all was well and good until I ran it on the server, which had some really long share names and/or long Resource values. Like this

Share name   Resource                        Remark

-------------------------------------------------------------------------------
C$           C:\                             Default share
IPC$                                         Remote IPC
ADMIN$       C:\WINDOWS                      Remote Admin
DEV2016      \\?\GLOBALROOT\Device\RsFx0410\\DEV2016
                                             SQL Server FILESTREAM share
RidiculouslyLongShareName
             C:\users\bfellows\Downloads
The command completed successfully.
Super. The output of net share is quasi fixed width and it just wraps whatever it needs to onto the next line/column.

What are the shares, v2

Windows Management Instrumentation to the rescue! WMIC.exe /output:stdout /namespace:\\root\cimv2 path Win32_Share GET Name, Path. That's way better, sort of

Name                       Path
ADMIN$                     C:\WINDOWS
C$                         C:\
DEV2016                    \\?\GLOBALROOT\Device\RsFx0410\\DEV2016
IPC$
RidiculouslyLongShareName  C:\users\bfellows\Downloads
Originally, that command ended with GET *, which resulted in a lot more information being returned than I needed. The devil, though, is that the output width is dependent upon the source data. If I remove the network share for RidiculouslyLongShareName and rerun the command, I get this output
Name     Path
ADMIN$   C:\WINDOWS
C$       C:\
DEV2016  \\?\GLOBALROOT\Device\RsFx0410\\DEV2016
IPC$
Users    C:\Users
It appears to be the longest element plus 2 spaces for this data, but who knows what the real encoding rule is. The good thing is that, while variable, the header row gives me enough information to slice up the data as needed.

This needs to run anywhere

The next problem is that this process in Dev runs on D:\Share but in QA it is on I:\datafiles\instance1, and oh by the way, there are two shares for the I drive: \\qa\Instance1 (I:\datafiles\instance1) and \\qa\datafiles (I:\datafiles). In the case where there are multiple shares, if there's one for the folder where the script is running, that's the one we want. Otherwise, it's probably the "nearest" path, which I interpreted as having the longest path.

Code good

Here's my beautiful, hacky python. Wherever this script runs, it will then attempt to render the best share path to the same location.

import os
import subprocess

def _generate_share_dictionary(headerRow):
    """Accepts a variable width, white space delimited string that we attempt
        to divine column delimiters from. Returns a dictionary of field names
        and a tuple with start/stop slice positions"""

    # This used to be a more complex problem before I realized I didn't have
    # to do GET * in my source. GET Name, Path greatly simplifies
    # but this code is generic so I keep it as is

    header = headerRow
    fields = header.split()
    tempOrds = {}
    ords = {}
    # Populate the temporary ordinals dictionary with field name and the
    # starting, zero based, ordinal for it.
    # i.e. given
    #Name     Path
    #01234567890123456789
    # we would expect Name:0, Path:9
    for field in fields:
        tempOrds[field] = headerRow.index(field)

    # Knowing our starting ordinal positions, we will build a dictionary of tuples
    # that contain starting and ending positions of our fields
    for iter in range(0, len(fields) -1):
        ords[fields[iter]] = (tempOrds[fields[iter]], tempOrds[fields[iter+1]])
        
    # handle the last element
    ords[fields[-1]] = (tempOrds[fields[-1]], len(headerRow))

    return ords

def get_network_shares():
    """Use WMIC to get the full share list. Needed because "net share" isn't parseable"""
    _command = r"C:\Windows\System32\wbem\WMIC.exe /output:stdout /namespace:\\root\cimv2 path Win32_Share GET Name, Path"
    #_command = r"C:\Windows\System32\wbem\WMIC.exe /output:stdout /namespace:\\root\cimv2 path Win32_Share GET *"
    _results = subprocess.check_output(_command, shell=True).decode('UTF-8')

    _headerRow = _results.splitlines()[0]
    headerOrdinals = _generate_share_dictionary(_headerRow)

    _shares = parse_network_shares_name_path(headerOrdinals, _results)
    return _shares

def parse_network_shares_name_path(header, results):
    """Rip apart the results using our header dictionary"""
    _shares = {}
    #use the above to slice into our results
    #skipping first line since it is header
    for _line in results.splitlines()[1:]:
        if _line:
            _shares[_line[header["Name"][0]: header["Name"][1]].rstrip()] = _line[header["Path"][0]: header["Path"][1]].rstrip()
    return _shares
    

def translate_local_path_to_share(currentPath):
    """Convert the supplied path to the best match in the shares list"""
    shareName = ""
    defaultShare = ""
    shares = get_network_shares()

    # find the first share match
    if currentPath in shares.values():
        shareName = [key for key, value in shares.items() if value == currentPath][0]
    else:
        #see if we can find a partial match
        # favor longest path
        best = ""
        pathLength = 0
        for share, path in shares.items():
            # path can be empty due to IPC$ share
            if path:
                # Is the share even applicable?
                if path in currentPath:
                    # Favor the non default/admin share (DriveLetter$)
                    if share.endswith('$'):
                        defaultShare = currentPath.replace(path[:-1], share)
                    else:
                        if len(path) > pathLength:
                            shareName = currentPath.replace(path[:-1], share)

        # No other share was found
        if (defaultShare and not shareName):
            shareName = defaultShare
    x = os.path.join(r"\\" + os.environ['COMPUTERNAME'], shareName)
    print("Current folder {} maps to {}".format(currentPath, x))
    
    return os.path.join(r"\\" + os.environ['COMPUTERNAME'], shareName)


def main():
    
    current = os.getcwd()
    #current = "C:\WINDOWS"
    share = translate_local_path_to_share(current)
    print("{} aka {}".format(current, share))

if __name__ == "__main__":
    main()

Takeaways

You probably won't ever need all of the above code to be able to swap out a local path for a network share using python, but by golly, if you do, have fun. Also, python is still my most favorite language, 14 years running.

Thursday, February 23, 2017

Biml Database Inspection

Biml Database Inspection

Importing tables via Biml

I've mentioned how using Biml to reverse engineer a very large database was the only option, and there is plenty of great material in the community about how to do this, but one thing I kept stumbling over was that using the import methods to build the above Biml always seemed to fail somewhere along the way. I assumed it was just me not understanding how it works. But today someone else got bit by the same stumbling block, so I want to talk through the basics of how the modeling works within Biml; subsequent posts will show the source of the issue and a workaround.

Preface

Biml allows you to define the tables, views, and constraints in your database. Let's look at a minimal viable table definition for dbo.AWBuildVersion from AdventureWorks2014. Ready?

<Biml xmlns="http://schemas.varigence.com/biml.xsd">
    <Connections>
        <OleDbConnection Name="Adventureworks" ConnectionString="Data Source=localhost\dev2014;Initial Catalog=AdventureWorks2014;Provider=SQLNCLI11;Integrated Security=SSPI;" />
    </Connections>
    <Databases>
        <Database ConnectionName="Adventureworks" Name="AdventureWorks2014" />
    </Databases>
    <Schemas>
        <Schema Name="dbo" DatabaseName="AdventureWorks2014" />
    </Schemas>
    <Tables>
        <Table Name="AWBuildVersion" SchemaName="AdventureWorks2014.dbo">
            <Columns>
                <Column Name="SystemInformationID" DataType="Byte" IdentityIncrement="1" />
                <Column Name="Database Version" DataType="String" Length="25" />
                <Column Name="VersionDate" DataType="DateTime" />
                <Column Name="ModifiedDate" DataType="DateTime" />
            </Columns>
            <Keys>
                <PrimaryKey Name="PK_AWBuildVersion_SystemInformationID" Clustered="false">
                    <Columns>
                        <Column ColumnName="SystemInformationID" />
                    </Columns>
                </PrimaryKey>
            </Keys>
            <Indexes>
            </Indexes>
        </Table>
    </Tables>
</Biml>

Wow, that's a lot! Let's break it down.

Connections

Our Connections collection has a single entity in it, an OLE DB connection named Adventureworks (remember, all of this is case sensitive, so this Adventureworks is a different beast from AdventureWorks, ADVENTUREWORKS, etc). This provides enough information to make a database connection. Of note, we have the server and catalog/database name defined in there. The type of connection used determines the specific property names, i.e. Initial Catalog & Data Source versus Server & Database, etc. Look at ConnectionStrings.com if you really want to see how rich (horrible) this becomes.

Databases

A Database (AstDatabaseNode) requires a Name and a ConnectionName. We certainly know the connection since we just defined it in the previous section and so here I'm naming the Database AdventureWorks2014. This just happens to align with the value specified in Initial Catalog but use whatever is natural. Do not name it after the environment though, please. There is nothing worse than talking about an entity named "DevSalesDb" which is referencing the production database but named after the location it was built from.

Schemas

A Schema (AstSchemaNode) requires a Name and a DatabaseName (see above). Since I'm after a table in the dbo schema, I just specify it as the name.

Tables

Finally, the Table (AstTableNode) which requires a Name and a SchemaName. Where have we seen this pattern? However, look at the value of the SchemaName. We have to qualify the schema with the database because we could have two Schema entities in dbo that point to different Database entities.

Once inside the Table entity, we can define our columns, keys, indices as our heart desires.

Wrap up

An amusing side note: if you're using Mist/BimlStudio to import the Schema and Table, the wizard renders all of this correctly; the defect seems only to be in how I'm scripting the above entities.

Wednesday, November 9, 2016

Playing audio via Biml

Playing audio via Biml

How often do you need to play audio while you're compiling your Biml packages? Never? Really? Huh, just me then. Very well, chalk this blog post up as one to show you that you really can do *anything* in Biml that you can do in C#.

When I first learned how to play audio in .NET, I would hook the Windows Media Player dll and use that. The first thing I then did was create an SSIS package with a script task that played the A-Team theme song while it ran. Useless, but a fun demo. Fast forward to using Biml, and I could not for the life of me get Windows Media Player to correctly embed in a Biml Script Task. I suspect it's something to do with COM bindings that Biml doesn't yet support. Does this mean you shouldn't use Biml? Hell no. It just means I've wandered far into a corner case that doesn't yet have support.

Hours before going on the stage for my Summit 2016 presentation, I took another crack at finding a way to play music via .NET and discovered the System.Media.SoundPlayer class and I was ecstatic.

Tada!

You understand this code, it's not hard. I create a string variable to hold the path to my sound file. I picked a sound file in a well known location. I prefaced my string with the @ symbol to avoid having to escape the default windows path separator.

<Biml xmlns="http://schemas.varigence.com/biml.xsd">
<#
string sourceFile = string.Empty;
sourceFile = @"C:\Windows\Media\tada.wav";
System.Media.SoundPlayer player = new System.Media.SoundPlayer(sourceFile);
player.Play();
#>
</Biml>

SSIS package that plays music

Using the above knowledge, we can also have an SSIS package with a script task to play an arbitrary media file

<Biml xmlns="http://schemas.varigence.com/biml.xsd">
    <Packages>
        <Package Name="SoundPlayer" >
            <Variables>
                <Variable Name="AudioPath" DataType="String">http://www.moviewavs.com/0053148414/WAVS/Movies/Star_Wars/imperial.wav</Variable>
            </Variables>
            <Tasks>
                <Script ProjectCoreName="ST_PlayAudio" Name="SCR Echo Back">
                    <ScriptTaskProjectReference ScriptTaskProjectName="ST_PlayAudio" />
                </Script>            
            </Tasks>
        </Package>
    </Packages>
<ScriptProjects>
    <ScriptTaskProject ProjectCoreName="ST_PlayAudio" Name="ST_PlayAudio" VstaMajorVersion="0">
            <ReadOnlyVariables>
                <!-- List all the variables you are interested in tracking -->
                <Variable Namespace="User" VariableName="AudioPath" DataType="String" />
            </ReadOnlyVariables>
            <Files>
                <File Path="ScriptMain.cs" BuildAction="Compile">using System;
using System.Data;
using Microsoft.SqlServer.Dts.Runtime;
using System.Windows.Forms;

namespace ST_PlayAudio
{
    [Microsoft.SqlServer.Dts.Tasks.ScriptTask.SSISScriptTaskEntryPointAttribute]
    public partial class ScriptMain : Microsoft.SqlServer.Dts.Tasks.ScriptTask.VSTARTScriptObjectModelBase
    {
        public void Main()
        {
            string sourceFile = string.Empty;
            sourceFile = Dts.Variables[0].Value.ToString();
            System.Media.SoundPlayer player = new System.Media.SoundPlayer(sourceFile);
            player.Play();
            Dts.TaskResult = (int)ScriptResults.Success;
        }

        enum ScriptResults
        {
            Success = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Success,
            Failure = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Failure
        };
    }
}                </File>
                <File Path="Properties\AssemblyInfo.cs" BuildAction="Compile">
using System.Reflection;
using System.Runtime.CompilerServices;

[assembly: AssemblyVersion("1.0.*")]
                </File>
            </Files>
            <AssemblyReferences>
                <AssemblyReference AssemblyPath="System" />
                <AssemblyReference AssemblyPath="System.Data" />
                <AssemblyReference AssemblyPath="System.Windows.Forms" />
                <AssemblyReference AssemblyPath="System.Xml" />
                <AssemblyReference AssemblyPath="Microsoft.SqlServer.ManagedDTS.dll" />
                <AssemblyReference AssemblyPath="Microsoft.SqlServer.ScriptTask.dll" />
            </AssemblyReferences>
        </ScriptTaskProject>
    </ScriptProjects>
</Biml>

Now, you could marry the two Biml snippets together so that you get audio playing while you build an SSIS package that plays audio, Dawg.