Feed aggregator

Information about Heartbleed and IIS

The Official Microsoft IIS Site - Thu, 10/04/2014 - 03:58
The Heartbleed vulnerability in OpenSSL (CVE-2014-0160) has received a significant amount of attention recently. While the discovered issue is specific to OpenSSL, many customers are wondering whether this affects Microsoft’s offerings, specifically Windows and IIS. Microsoft Account and Microsoft Azure, along with most Microsoft Services, were not impacted by the OpenSSL vulnerability. Windows’ implementation of SSL/TLS was also not impacted. We also want to assure our customers that default…

FTP ETW Tracing and IIS 8 - Part 2

The Official Microsoft IIS Site - Wed, 09/04/2014 - 18:17

Shortly after I published my FTP ETW Tracing and IIS 8 blog post, I was using the batch file from that blog to troubleshoot an issue that I was having with a custom FTP provider. One of the columns that I display in my results is Clock-Time, which is a sequential timestamp used to indicate the time and order in which the events occurred.

(Click the following image to view it full-size.)

At first glance the Clock-Time values might appear to be a range of useless numbers, but I use Clock-Time values quite often when I import the data from my ETW traces into something like Excel and I need to sort the data by the various columns.

That being said, apart from keeping the trace events in order, Clock-Time isn't a very user-friendly value. However, LogParser has some great built-in functions for crunching date/time values, so I decided to update the script to take advantage of some LogParser coolness and reformat the Clock-Time value into a human-readable Date/Time value.

My first order of business was to figure out how to decode the Clock-Time value. Since Clock-Time increases for each event, it is obviously an offset from some constant, and after a bit of searching I found that the Clock-Time value is the offset in 100-nanosecond intervals since midnight on January 1, 1601. (Windows uses that epoch in a lot of places, not just ETW.) Once I had that information, it was pretty easy to come up with a LogParser formula to convert the Clock-Time value into the local time for my system, which is much easier to read.
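Before wiring this into LogParser, it is easy to sanity-check the epoch math. Here is a minimal Python sketch (not part of the original post; the function name is my own) that converts a Clock-Time value to UTC:

```python
# Sanity check of the Clock-Time decoding described above: the value is a
# count of 100-nanosecond intervals since midnight on January 1, 1601
# (the Windows FILETIME epoch).
from datetime import datetime, timedelta

def clock_time_to_utc(clock_time):
    """Convert an ETW Clock-Time (100-ns ticks since 1601-01-01) to UTC."""
    # 10 ticks of 100 ns = 1 microsecond
    return datetime(1601, 1, 1) + timedelta(microseconds=clock_time // 10)

# 116444736000000000 ticks is the well-known offset of the Unix epoch:
print(clock_time_to_utc(116444736000000000))  # 1970-01-01 00:00:00
```

The LogParser formula below does the same arithmetic, dividing by 10,000,000 to get whole seconds before adding the 1601 base timestamp.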

With that in mind, here is the modified batch file:

@echo off

rem ======================================================================

rem Clean up old log files
for %%a in (ETL CSV) do if exist "%~n0.%%a" del "%~n0.%%a"

echo Starting the ETW session for full FTP tracing...
LogMan.exe start "%~n0" -p "IIS: Ftp Server" 255 5 -ets
echo.
echo Now reproduce your problem.
echo.
echo After you have reproduced your issue, hit any key to close the FTP
echo tracing session. Your trace events will be displayed automatically.
echo.
pause>nul

rem ======================================================================

echo.
echo Closing the ETW session for full FTP tracing...
LogMan.exe stop "%~n0" -ets

rem ======================================================================

echo.
echo Parsing the results - this may take a long time depending on the size of the trace...
if exist "%~n0.etl" (
   TraceRpt.exe "%~n0.etl" -o "%~n0.csv" -of CSV
   LogParser.exe "SELECT [Clock-Time], TO_LOCALTIME(ADD(TO_TIMESTAMP('1601-01-01 00:00:00', 'yyyy-MM-dd hh:mm:ss'), TO_TIMESTAMP(DIV([Clock-Time],10000000)))) AS [Date/Time], [Event Name], Type, [User Data] FROM '%~n0.csv'" -i:csv -e 2 -o:DATAGRID -rtp 20
)

When you run this new batch file, it will display an additional "Date/Time" column with a more informative value in local time for the server where you captured the trace.

(Click the following image to view it full-size.)

The new Date/Time column is considerably more practical, so I'll probably keep it in the batch file that I use when I am troubleshooting. You will also notice that I kept the original Clock-Time column; I chose to do so because I will undoubtedly continue to use that column for sorting when I import the data into something else, but you can safely remove that column if you would prefer to use only the new Date/Time value.

That wraps it up for today's post. :-)

(Cross-posted from http://blogs.msdn.com/robert_mcmurray/)

FTP ETW Tracing and IIS 8

The Official Microsoft IIS Site - Tue, 08/04/2014 - 23:35

In the past I have written a couple of blogs about using the FTP service's Event Tracing for Windows (ETW) features to troubleshoot issues; see FTP and ETW Tracing and Troubleshooting Custom FTP Providers with ETW for details. Those blog posts contain batch files which use the built-in Windows LogMan utility to capture an ETW trace, and the downloadable LogParser utility to parse the results into human-readable form. I use the batch files from those blogs quite often, especially when I am developing custom FTP providers which add new functionality to my FTP servers.

Unfortunately, sometime around the release of Windows 8 and Windows Server 2012 I discovered that the ETW format had changed, and the current version of LogParser (version 2.2) cannot read the new ETW files. When you try to use the batch files from my blog with IIS 8, you see the following errors:

Verifying that LogParser.exe is in the path...
Done.

Starting the ETW session for full FTP tracing...
The command completed successfully.

Now reproduce your problem.

After you have reproduced your issue, hit any key to close the FTP tracing session. Your trace events will be displayed automatically.

Closing the ETW session for full FTP tracing...
The command completed successfully.

Parsing the results - this may take a long time depending on the size of the trace...
Task aborted.
Cannot open <from-entity>: Trace file "C:\temp\ftp.etl" has been created on a OS version (6.3) that is not compatible with the current OS version


Statistics:
-----------
Elements processed: 0
Elements output: 0
Execution time: 0.06 seconds

I meant to research a workaround at the time, but one thing led to another and I simply forgot about doing so. But I needed to use ETW the other day when I was developing something, so that seemed like a good time to quit slacking and come up with an answer. :-)

With that in mind, I came up with a very easy workaround, which I will present here. Once again, this batch file requires LogParser to be installed on your system, but for the sake of brevity I have removed the lines which check for LogParser from this version of the batch file. (You can copy those lines from my previous blog posts if you want that functionality restored.)

Here's how this workaround is implemented: instead of creating an ETW log and then parsing it directly with LogParser, this new batch file invokes the built-in Windows TraceRpt command to parse the ETW file and save the results as a CSV file, which is then read by LogParser to display the results in a datagrid, like the batch files in my previous blogs:

@echo off

rem ======================================================================

rem Clean up old log files
for %%a in (ETL CSV) do if exist "%~n0.%%a" del "%~n0.%%a"

echo Starting the ETW session for full FTP tracing...
LogMan.exe start "%~n0" -p "IIS: Ftp Server" 255 5 -ets
echo.
echo Now reproduce your problem.
echo.
echo After you have reproduced your issue, hit any key to close the FTP
echo tracing session. Your trace events will be displayed automatically.
echo.
pause>nul

rem ======================================================================

echo.
echo Closing the ETW session for full FTP tracing...
LogMan.exe stop "%~n0" -ets

rem ======================================================================

echo.
echo Parsing the results - this may take a long time depending on the size of the trace...
if exist "%~n0.etl" (
   TraceRpt.exe "%~n0.etl" -o "%~n0.csv" -of CSV
   LogParser.exe "SELECT [Clock-Time], [Event Name], Type, [User Data] FROM '%~n0.csv'" -i:csv -e 2 -o:DATAGRID -rtp 20
)

Here's another great thing about this new batch file: it will also work down-level on Windows 7 and Windows Server 2008, so if you have been using my previous batch files with IIS 7, you can simply replace your old batch file with this new version. You will see a few differences between the results from my old batch files and this new version, namely that I included a couple of extra columns that I like to use for troubleshooting.

(Click the following image to view it full-size.)

There is one last thing which I would like to mention in closing: I realize that it would be much easier on everyone if Microsoft simply released a new version of LogParser which works with the new ETW format, but unfortunately there are no plans at the moment to release a new version of LogParser. And trust me - I'm just as depressed about that fact as anyone else. :-(

(Cross-posted from http://blogs.msdn.com/robert_mcmurray/)

Kudu for Azure Websites updated with ‘Process Explorer’ tab

The Official Microsoft IIS Site - Sat, 05/04/2014 - 03:54

I’m sure everyone appreciates the pace at which the Azure Websites team releases cool features. Azure Websites was all over the announcements at the recent //build conference. The team has updated the Kudu console with a new tab named ‘Process Explorer’. You will see it in the list of options available in the site. To access the Kudu console, go to https://yourwebsite.scm.azurewebsites.net (note the https and the .scm in the URL).

Read more about this here.

Announcing: Smooth Streaming Plugin for OSMF with WAMS MPEG-DASH support (Beta)

The Official Microsoft IIS Site - Fri, 04/04/2014 - 22:08
The Windows Azure Media Services team is very pleased to announce the beta version of the Microsoft Smooth Streaming plugin for OSMF with WAMS MPEG-DASH support. Using the Smooth Streaming OSMF plugin, you can add Smooth Streaming and Windows Azure Media Services on-demand MPEG-DASH (beta) content playback capabilities to existing OSMF and Strobe Media Playback players, and furthermore build rich media experiences for Adobe Flash Player endpoints using the same Windows Azure Media Services you use today to target Smooth Streaming playback on other devices, such as Windows 8 store apps and browsers. This version of the Smooth Streaming plugin includes the following capabilities and works with the OSMF 2.0 APIs:
  • On-demand Smooth Streaming/Windows Azure Media Services on-demand MPEG-DASH playback (Play, Pause, Seek, Stop)
  • Live Smooth Streaming playback (Play)
  • Live DVR functions (Pause, Seek, DVR Playback, Go-to-Live)
  • Support for video codecs – H.264
  • Support for Audio codecs – AAC
  • Multiple audio language switching with OSMF built-in APIs
  • Max playback quality selection with OSMF built-in APIs
  • This version only supports OSMF 2.0

Note: This is prerelease software; features, support, and APIs are subject to change!

The initial release notes are available through the MS Download Center and can be found here. To get started with this new plugin, you can download the Smooth Streaming plugin for OSMF from the MS Download Center. Basic information for building an OSMF player with the Smooth Streaming plugin and loading the Smooth Streaming dynamic plugin into the Strobe Media Player can be found here.

To enable Windows Azure Media Services MPEG-DASH support:

Please refer to the Dynamic Packaging configuration page; also, here is a great post from my colleague John Deutcher with how-to details (MPEG DASH preview from Windows Azure Media Services).

Feedback

If you have feature requests, or want to provide general feedback—we want to hear it all! Please use the Smooth Streaming plugin for OSMF Forum thread to let us know what’s working, what isn’t, and how we can improve your Smooth Streaming development experience for OSMF applications. 

Installing WordPress, PHP, and MySQL on Windows Server 2012 R2

The Official Microsoft IIS Site - Jue, 03/04/2014 - 01:21
Microsoft’s Web Platform Installer (Web PI) makes installing applications a breeze. In a recent blog post I covered just how easy installing IIS has become using Web PI. In this walkthrough I’m going to cover installing WordPress, PHP, and MySQL using Web PI. I remember the days when installing these applications was a manual process. Depending on your level of expertise it was quite a challenge to get everything working properly. If you’ve ever tried to uninstall and then reinstall MySQL you know...(read more)

Announcing: Microsoft Smooth Streaming Client SDK for Windows Phone 8.1

The Official Microsoft IIS Site - Wed, 02/04/2014 - 23:00

The Windows Azure Media Services team is very pleased to announce the Smooth Streaming Client SDK for Windows Phone 8.1, aligned with the announcement of Windows Phone 8.1 at the //Build/ conference today.

This release includes the same features as the Smooth Streaming Client SDK for Windows 8/8.1 and also uses the same API set, which will help unify development effort across Windows, Windows Phone, and Xbox One applications.

The Smooth Streaming Client SDK release supports:

  • WP 8.1 store app XAML and HTML5/JS applications
  • On-demand Playback (Play, Pause, Seek, Stop)
  • Live playback with seeking capabilities (Play, Pause, Seek, Go-to-Live)*
  • Support for video codecs - H.264, VC-1
  • Support for Audio codecs - AAC, WMA Pro
  • Multiple audio language switching with APIs*
  • Track-selection for playback (for example, restrict the available bitrates)*
  • Text and Spare Track APIs*
  • Content protection - Microsoft PlayReady integration (the PlayReady SDK can be downloaded from here)
  • Trickplay (slow motion, fast-forward and rewind)
(**) This version is only supported on Windows Phone 8.1 and with Visual Studio 2013 Update 2. You can get the Windows 8 version from here and the Windows 8.1 version from here.

(*) Some of these features aren't supported with Windows Phone 8.1 default APIs. To enable these features, you need to use the Smooth Streaming Client SDK APIs.

Getting started 

Announcing: Microsoft Smooth Streaming Client 2.5 with MPEG DASH support

The Official Microsoft IIS Site - Wed, 02/04/2014 - 22:37

The PlayReady team, working in conjunction with the Windows Azure Media Services team, is pleased to announce the availability of the Microsoft Smooth Streaming Client 2.5 with MPEG DASH support. This release adds the ability to parse and play MPEG DASH manifests in the Smooth Streaming Media Engine (SSME), providing a Windows 7/Windows 8 and Mac OS solution that uses MPEG DASH for on-demand scenarios.

Developers who wish to move content libraries to DASH have the option of using DASH in places where Silverlight is supported. The existing SSME object model forms the basis of DASH support in the SSME. For example, DASH concepts like Adaptation Sets and Representations have been mapped to their logical counterparts in the SSME: Adaptation Sets are exposed as Smooth Streams, and Representations are exposed as Smooth Tracks. Existing track selection and restriction APIs can be expected to function identically for Smooth and DASH content. In most other respects, DASH support is transparent to the user, and a programmer who has worked with the SSME APIs can expect the same developer experience when working with DASH content.

Some details on the DASH support compared to Client 2.0:

  • A new value of ‘DASH’ has been added to the ManifestType enum. DASH content that has been mapped into Smooth can be identified by checking this property on the ManifestInfo object. Additionally the ManifestInfo object’s version is set to 4.0 for DASH content.
  • Support has been added for the four DASH Live Profile Addressing modes: Segment List, Segment Timeline, Segment Number, and Byte Range
    • For byte range addressable content, segments defined in the SIDX box map 1:1 with Chunks for the track.
  • A new property, MinByteRangeChunkLengthSeconds, has been added to Playback Settings to provide the SSME with a hint at the desired chunk duration. Multiple movie fragments will be addressed in a single chunk such that all but the last chunk have a duration greater than or equal to this property. For examples of how to set Playback Settings, see the Smooth documentation.
There are some limitations in this DASH release, including:
  • Dynamic MPD types are not supported
  • Multiple Periods are not supported in an MPD
  • The EMSG box is not supported.
  • The codec and content limitations that apply to Smooth similarly apply to DASH. (see http://msdn.microsoft.com/en-us/library/cc189080(v=vs.95).aspx)
  • Seeking is supported, but not trick play. Playback rate must be 1.
  • Multiplexed streams are not supported.
For more information and to get started with this new SDK, you can download the Smooth Streaming Client 2.5 from the MS Download Center.

Feedback

If you have feature requests, or want to provide general feedback—we want to hear it all! Please use the Smooth Streaming Client 2.5 forum thread to let us know what’s working.

Top support solutions!

The Official Microsoft IIS Site - Wed, 02/04/2014 - 01:42
Having been around for so long, and encompassing so many technologies, IIS has generated more than abundant information about its use and about solving problems. The IIS.Net website alone has thousands of articles, which can make it challenging to find what you need. To make things easier for IIS and other products, Microsoft support has set up a new blog resource called “Top Support Solutions”, which offers a hand-picked selection of links and information about Microsoft’s leading products. Wei Zhao from Microsoft support…

Ten down and counting

The Official Microsoft IIS Site - Tue, 01/04/2014 - 14:01
As much as I like practical jokes, a day like April Fools’ Day hardly seems like a time for celebration. But it is. Ten years ago today, I began an adventure that brought me to where I am today. After a comprehensive interview process, I was offered a position that would start on April 1. I thought surely it must be an April Fools’ joke. But, 10 years later, I am still employed with OrcsWeb, a company that provides managed hosting solutions. The last ten years as a systems administrator…

March 2014 IIS Community Newsletter

The Official Microsoft IIS Site - Sun, 30/03/2014 - 15:46
The March 2014 edition of the IIS Community Newsletter has been published. You’ll want to check it out for some of the latest news and highlights in the IIS community over the past month. Best of all, your blog posts and knowledge contribute to it and help make it all possible! http://www.iisnewsletter.com/archive/march2014.html Don’t forget to subscribe today and have it delivered directly to your inbox. http://www.iisnewsletter.com/subscribe.aspx We appreciate your support and love to…

Azure Web Sites – Continuous Deployment with Staged Publishing

The Official Microsoft IIS Site - Thu, 27/03/2014 - 22:04
At the beginning of the year, the Windows Azure Web Sites team released a preview of the Staged Publishing functionality. Staged Publishing allows you to create a staging site slot where you can publish a new version of the website and then test it before swapping to the production environment. This feature, together with Continuous Deployment via GitHub, BitBucket, or DropBox, enables some very powerful deployment scenarios. However, the preview release did not provide the optimal experience for…

Azure Web Sites – Continuous Deployment with Staged Publishing

RuslanY Blog - Thu, 27/03/2014 - 22:04

At the beginning of the year, the Windows Azure Web Sites team released a preview of the Staged Publishing functionality. Staged Publishing allows you to create a staging site slot where you can publish a new version of the website and then test it before swapping to the production environment. This feature, together with Continuous Deployment via GitHub, BitBucket, or DropBox, enables some very powerful deployment scenarios.

However, the preview release did not provide the optimal experience for enabling Continuous Deployment (CD) for the staging site; users had to configure a non-trivial workaround, as described in a blog post by Rick Rainey. Recently the Azure Web Sites team released an update that fixes that problem and makes setting up CD with staged publishing very simple. This blog post describes how to enable CD from a git repository located on BitBucket.

First, in order to use staged publishing you need to scale your web site to Standard mode:

After that you will be able to enable staged publishing:

When staged publishing is enabled you will see the staging site slot in portal.

Select it and then click on “Set up deployment from source control”:

In my case I created a repository on BitBucket where I have one simple PHP page, just for demo purposes:

The repository gets synchronized with the staging site slot and then when I browse to it I get the result of executing the PHP script that was checked in to my repository.

The script detects whether it runs in production or staging slot by checking the Host header of the request:

<html>
<head>
<title>My first PHP page</title>
</head>
<body>
<?php
echo "<h1>Hello World! This is a new version!</h1>";
$host = $_SERVER['HTTP_HOST'];
$is_production = TRUE;
if (strpos($host, 'staging') !== FALSE) {
    $is_production = FALSE;
}
if ($is_production === TRUE) {
    echo "<h2>This code is running in production.</h2>";
} else {
    echo "<h2>This code is running in staging.</h2>";
}
?>
</body>
</html>

Now that I have tested the script in staging environment I can swap it to production environment.

After that when I browse to the production site I get the expected result:

Now let’s assume that I want to deploy a new version of my script. I make a change in the script, check it in and push it to the BitBucket repository. After that I go back to the deployment history of my staging site and do a sync operation. That pulls my recent changes into the staging slot:

Now I can test how it works by browsing to the staging site:

Note that the production site at ruslanycd.azurewebsites.net is not affected while I am testing and debugging my staging site. Once I am done with verification I do the swap operation again and the latest change is now live in the production site:

 

Hyper-V Networking–Router Guard

Virtual PC Guy - Tue, 25/03/2014 - 20:40

Router guard is another advanced networking feature that was added in Windows Server 2012:

When you enable Router Guard, the Hyper-V switch will discard the following packets:

  • ICMPv4 Type 5 (Redirect message)
  • ICMPv4 Type 9 (Router Advertisement)
  • ICMPv6 Type 134 (Router Advertisement)
  • ICMPv6 Type 137 (Redirect message)

Much like with DHCP guard, the two most common questions I get about Router Guard are:

  1. Why would I want to enable this option?

    Imagine you have a virtual machine that is configured for routing services and is connected to multiple virtual networks.  You want to make sure that routing services are only provided on one specific virtual network.  In this case you would enable the router guard on any networks where you did not want the virtual machine to act as a router.

  2. Why isn’t this option enabled by default everywhere?

    Router Guard does have a relatively minimal impact on performance.  Given that most virtual machines are not running routing services, it is not enabled by default, as it is not needed.

You can configure this setting through the UI or with PowerShell.  To configure it with PowerShell you should use the RouterGuard parameter on the Set-VMNetworkAdapter cmdlet:
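The original post illustrated the command with a screenshot that didn't survive the repost. As a sketch (the virtual machine and switch names here are hypothetical), the command would look something like this:

```powershell
# Block router advertisements and redirects from every adapter of a VM
# (hypothetical VM name).
Set-VMNetworkAdapter -VMName "AppServer01" -RouterGuard On

# For a multi-homed routing VM, enable Router Guard only on the adapters
# attached to networks where it should NOT act as a router
# (hypothetical switch name).
Get-VMNetworkAdapter -VMName "EdgeRouter01" |
    Where-Object { $_.SwitchName -ne "RoutedNetwork" } |
    Set-VMNetworkAdapter -RouterGuard On
```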

Cheers,
Ben

Categories: Virtualization

Convert a Folder to an Application on a Remote IIS Host

The Official Microsoft IIS Site - Tue, 25/03/2014 - 15:07
The topic recently came up of the best way to convert a folder to an application on a remote shared hosting server. Some hosts may have this built in to their control panel. I know many (like Cytanium) have options in the…

Hyper-V Networking–DHCP Guard

Virtual PC Guy - Mon, 24/03/2014 - 19:02

If you start digging into the advanced settings section of a virtual network adapter – there is a lot of interesting stuff to look at.  Today I’m going to talk about the DHCP guard setting:

This setting stops the virtual machine from making DHCP offers over this network interface.  To be clear, this does not affect the ability to receive a DHCP offer (i.e. if you need to use DHCP to acquire an IP address, that will still work); it only blocks the ability for the virtual machine to act as a DHCP server.

Two questions that I often get about this feature are:

  1. Why would I want to enable this option?

    Imagine you have a DHCP server virtual machine that is connected to multiple virtual networks.  You want to make sure that DHCP offers are only provided on one specific virtual network.  In this case you would enable the DHCP guard on any networks where you did not want the virtual machine to act as a DHCP server.
  2. Why isn’t this option enabled by default everywhere?

    DHCP guard does have a relatively minimal impact on performance.  Given that most virtual machines are not running DHCP servers, it is not enabled by default, as it is not needed.

You can configure this setting through the UI or with PowerShell.  To configure it with PowerShell you should use the DHCPGuard parameter on the Set-VMNetworkAdapter cmdlet:
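The screenshot that originally followed this sentence is missing here; as a hedged sketch (the VM name is hypothetical), the command would look something like this:

```powershell
# Stop a VM from answering DHCP requests on all of its adapters
# (hypothetical VM name).
Set-VMNetworkAdapter -VMName "LabServer01" -DhcpGuard On

# Verify the current setting.
Get-VMNetworkAdapter -VMName "LabServer01" | Select-Object VMName, DhcpGuard
```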

Cheers,
Ben

Categories: Virtualization

Personalizing Removable Drive Icons for Windows Explorer

The Official Microsoft IIS Site - Sat, 22/03/2014 - 08:52
Like most people these days, I tend to swap a lot of removable storage devices between my ever-growing assortment of computing devices. The trouble is, I also have an ever-growing collection of removable storage devices, so it gets difficult keeping track of which device is which when I view them in Windows Explorer. The default images are pretty generic, and even though I try to use meaningful names, most of the drives look the same: By using a simple and under-used Windows feature, I have been…

My Daily Hyper-V Status Email–Part 5 of 5

Virtual PC Guy - Fri, 21/03/2014 - 17:07

After displaying event logs, virtual machine health, and storage health, the last thing included in my daily status email is usage data.

For this I take advantage of the built-in metrics functionality that is part of Hyper-V.

Looking at this report, I realize I should probably filter out replicated virtual machines (those are all the entries with zero data).  I guess I will have to fix that at some point in the future.  Regardless, here is the code that I use today:

# VM Metrics
$message = $message + "<style>TH{background-color:blue}TR{background-color:$($tableColor)}</style>"
$message = $message + "<B>Virtual Machine Utilization Report</B> <br> <br> "

$message = $message + "CPU utilization data: <br>" + ($metricsData | `
select-object @{Expression={$_.VMName};Label="Virtual Machine"}, `
@{Expression={$_.AvgCPU};Label="Average CPU Utilization (MHz)"} `
| ConvertTo-HTML -Fragment) `
+ " <br>"

$message = $message + "Memory utilization data: <br>" + ($metricsData | `
select-object @{Expression={$_.VMName};Label="Virtual Machine"}, `
@{Expression={$_.AvgRAM};Label="Average Memory (MB)"}, `
@{Expression={$_.MinRAM};Label="Minimum Memory (MB)"}, `
@{Expression={$_.MaxRAM};Label="Maximum Memory (MB)"} `
| ConvertTo-HTML -Fragment) `
+ " <br>"

$message = $message + "Network utilization data: <br>" + ($metricsData | `
select-object @{Expression={$_.VMName};Label="Virtual Machine"}, `
@{Expression={"{0:N2}" -f (($_.NetworkMeteredTrafficReport | where-object {($_.Direction -eq "Inbound")} `
| measure-object TotalTraffic -sum).sum / 1024)};Label="Inbound Network Traffic (GB)"}, `
@{Expression={"{0:N2}" -f (($_.NetworkMeteredTrafficReport | where-object {($_.Direction -eq "Outbound")} `
| measure-object TotalTraffic -sum).sum / 1024)};Label="Outbound Network Traffic (GB)"} `
| ConvertTo-HTML -Fragment) `
+ " <br>"

$message = $message + "Disk utilization data: <br>" + ($metricsData | `
select-object @{Expression={$_.VMName};Label="Virtual Machine"}, `
@{Expression={"{0:N2}" -f ($_.TotalDisk / 1024)};Label="Disk Space Used (GB)"} `
| ConvertTo-HTML -Fragment) `
+ " <br>"

$message = $message + "Metering Duration data: <br>" + ($metricsData | `
select-object @{Expression={$_.VMName};Label="Virtual Machine"}, `
@{Expression={$_.MeteringDuration};Label="Metering data duration"} `
| ConvertTo-HTML -Fragment) `
+ " <br>"

# Reset metrics
get-vm | Reset-VMResourceMetering
get-vm | Enable-VMResourceMetering

Notes about this code:

  • $metricsData contains the output of “get-vm | measure-vm” (this is mentioned in my first post in this series).  The reason I do this is that measure-vm is a heavy command (it uses a chunk of CPU and disk), so I only want to run it once.
  • Once again - I use raw HTML to set the color of the table headers. 
  • Again - I run the output of these commands through Select-Object with the use of the “Expression” option to set column labels appropriately.
  • Again - I use ConvertTo-HTML –Fragment to get a nice HTML table outputted.
  • At the end of this code I reset the counters, and enable metering on all virtual machines.  I do this so that if I add any new virtual machines, they get picked up automatically.

Cheers,
Ben

Categories: Virtualization

My Daily Hyper-V Status Email–Part 4 of 5

Virtual PC Guy - Thu, 20/03/2014 - 17:48

Now that I have talked about displaying event log information and virtual machine health information; the next part of my status email is storage health information.

In my experience, the most common failure for my servers is a failed hard disk.  Now, as I have multiple levels of redundancy configured in my storage, it is not always obvious that a disk has failed.  Luckily, it is very easy to get this information with PowerShell.

In fact, this is one of the primary reasons why I like using Storage Spaces: the great integration with PowerShell.  Here is the code that I use to generate this table:

# Storage Health
$message = $message + "<style>TH{background-color:DarkGreen}TR{background-color:$($errorColor)}</style>"
$message = $message + "<B>Storage Health</B> <br> <br>"

$message = $message + "Physical Disk Health: <br>" + ((Get-PhysicalDisk | `
Select-Object @{Expression={$_.FriendlyName};Label="Physical Disk Name"}, `
@{Expression={$_.DeviceID};Label="Device ID"}, `
@{Expression={$_.OperationalStatus};Label="Operational Status"}, `
@{Expression={$_.HealthStatus};Label="Health Status"}, `
@{Expression={"{0:N2}" -f ($_.Size / 1073741824)};Label="Size (GB)"} `
| ConvertTo-HTML -Fragment) `
| %{if($_.Contains("<td>OK</td><td>Healthy</td>")){$_.Replace("<tr><td>", "<tr style=`"background-color:$($tableColor)`"><td>")}else{$_}}) `
+ " <br>"

$message = $message + "Storage Pool Health: <br>" + ((Get-StoragePool | `
where-object {($_.FriendlyName -ne "Primordial")} | `
Select-Object @{Expression={$_.FriendlyName};Label="Storage Pool Name"}, `
@{Expression={$_.OperationalStatus};Label="Operational Status"}, `
@{Expression={$_.HealthStatus};Label="Health Status"} `
| ConvertTo-HTML -Fragment) `
| %{if($_.Contains("<td>OK</td><td>Healthy</td>")){$_.Replace("<tr><td>", "<tr style=`"background-color:$($tableColor)`"><td>")}else{$_}}) `
+ " <br>"

$message = $message + "Virtual Disk Health: <br>" + ((Get-VirtualDisk | `
Select-Object @{Expression={$_.FriendlyName};Label="Virtual Disk Name"}, `
@{Expression={$_.OperationalStatus};Label="Operational Status"}, `
@{Expression={$_.HealthStatus};Label="Health Status"} `
| ConvertTo-HTML -Fragment) `
| %{if($_.Contains("<td>OK</td><td>Healthy</td>")){$_.Replace("<tr><td>", "<tr style=`"background-color:$($tableColor)`"><td>")}else{$_}}) `
+ " <br>"

Notes about this code:

  • I am using “Get-PhysicalDisk”, “Get-StoragePool” and “Get-VirtualDisk” to gather the raw data.
  • Once again - I use raw HTML to set the color of the table headers. 
  • Again - I run the output of these commands through Select-Object with the use of the “Expression” option to set column labels appropriately.
  • Again - I use ConvertTo-HTML –Fragment to get a nice HTML table outputted.
  • Again - I implement color coding for individual entries in the table.  I set each table row to be “red” by default.  I then do some string parsing to see if the health is good, and switch the background color if I get a positive result.

Cheers,
Ben

Categories: Virtualization

My Daily Hyper-V Status Email–Part 3 of 5

Virtual PC Guy - Wed, 19/03/2014 - 21:01

Continuing on with my daily status email series: after displaying event log information, my email displays a high-level summary of the virtual machine health:

These tables are generated with the following code:

# VM Health
$message = $message + "<style>TH{background-color:Indigo}TR{background-color:$($errorColor)}</style>"
$message = $message + "<B>Virtual Machine Health</B> <br> <br>"

$message = $message + "Virtual Machine Health: <br>" + ((Get-VM | `
Select-Object @{Expression={$_.Name};Label="Name"}, `
@{Expression={$_.State};Label="State"}, `
@{Expression={$_.Status};Label="Operational Status"}, `
@{Expression={$_.UpTime};Label="Up Time"} `
| ConvertTo-HTML -Fragment) `
| %{if($_.Contains("<td>Operating normally</td>")){$_.Replace("<tr><td>", "<tr style=`"background-color:$($warningColor)`"><td>")}else{$_}} `
| %{if($_.Contains("<td>Running</td><td>Operating normally</td>")){$_.Replace("<tr style=`"background-color:$($warningColor)`"><td>", "<tr style=`"background-color:$($tableColor)`"><td>")}else{$_}}) `
+ " <br>"

# VM Replication Health
$message = $message + "<style>TH{background-color:Indigo}TR{background-color:$($errorColor)}</style>"
$message = $message + "<B>Virtual Machine Replication Health</B> <br> <br>"

$message = $message + "Virtual Machine Replication Health: <br>" + ((Get-VM | `
Select-Object @{Expression={$_.Name};Label="Name"}, `
@{Expression={$_.ReplicationState};Label="State"}, `
@{Expression={$_.ReplicationHealth};Label="Health"}, `
@{Expression={$_.ReplicationMode};Label="Mode"} `
| ConvertTo-HTML -Fragment) `
| %{if($_.Contains("<td>Replicating</td><td>Normal</td>")){$_.Replace("<tr><td>", "<tr style=`"background-color:$($tableColor)`"><td>")}else{$_}}) `
+ " <br>"

Both of these tables are generated by taking the output of “Get-VM” and displaying different information.

Notes about this code:

  • Once again - I use raw HTML to set the color of the table headers. 
  • Again - I run the output of these commands through Select-Object with the use of the “Expression” option to set column labels appropriately.
  • Again - I use ConvertTo-HTML –Fragment to get a nice HTML table outputted.
  • This time I do something different to get color coding for individual entries in the table.  I actually set each table row to be “red” by default.  I then do some string parsing to see if the health is good, and switch the background color if I get a positive result.  The reason why I use this approach is that the list of “known good states” is much smaller than the list of “known bad states”.
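The "default to red, then whitelist the known-good rows" trick is language-agnostic string parsing. Here is a minimal Python sketch for illustration (the marker text, row data, and color name are hypothetical, simplified to a single known-good state):

```python
# Rows start with the table-wide "error" background; only rows containing
# the known-good marker get restyled, mirroring the PowerShell approach above.
GOOD_MARKER = "<td>Running</td><td>Operating normally</td>"

def recolor(row, good_color="LightGreen"):
    # Whitelist: restyle only rows that match the known-good marker.
    if GOOD_MARKER in row:
        return row.replace("<tr><td>", '<tr style="background-color:%s"><td>' % good_color)
    return row

rows = [
    "<tr><td>VM01</td><td>Running</td><td>Operating normally</td></tr>",
    "<tr><td>VM02</td><td>Off</td><td>Operating normally</td></tr>",
]
styled = [recolor(r) for r in rows]
```

Because the good-state list is short, this whitelist check stays much simpler than enumerating every possible bad state.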

Cheers,
Ben

Categories: Virtualization