Tag Archives: SharePoint

SharePoint 2013 App Domain dedicated Port binding vs dedicated IP binding

I’ve recently been researching what exactly is needed to implement the App Model in SharePoint 2013.  Notably, the environment into which we would be deploying SharePoint 2013 doesn’t support creating multiple IPs on a single box, so I had to get creative with the IIS bindings and use port separation instead.

The issue is that this isn’t a use case documented by Microsoft, so it was time to dig under the covers.

The app domains that you configure for a farm are essentially root domains for automatically generated host-named site collections (HNSC).  You give SharePoint an app domain of myspapps.com and SharePoint dynamically generates HNSCs with each app “install”.

This is why the Site Subscription service (which powers HNSCs) is required to enable apps.  So we set up a wildcard DNS entry, point it at some sort of reverse proxy or load balancer, and direct it at our servers on a custom nonstandard http/https port.

It would be great if this worked out of the box, but any redirect URLs sent back to the browser client include the custom port number.  If you don’t have your proxy or load balancer configured to take this into account, you end up with 404s.  This is most evident with the out-of-the-box defaultcss.ashx: your app will load, but the stock stylesheets will fail.

In normal site collections/web applications this is not an issue, as we can set up alternate access mappings (public = reverse proxy URL, internal = URL on the custom port); in the case of an HNSC, the commandlet you use to create the site collection also allows you to specify the port.
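For a standard web application, that AAM setup might look something like the following sketch (URLs are placeholders, and the exact parameters should be checked against your farm’s cmdlet help):

```powershell
# Hedged sketch: the public URL (what the browser sees) is the web application's
# zone URL; the custom-port URL the proxy actually hits is added as an internal URL.
New-SPAlternateURL -WebApplication "https://portal.contoso.com" `
    -Url "http://spserver:8080" -Zone Default -Internal
```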

Microsoft already recognized a limitation in that apps could not support multiple auth zones or HNSCs, and provided a solution in the March PU.  They added a few new commandlets for generating per-web-application app domains, with support for multiple zones as well.  The new commandlets take advantage of more of the HNSC functionality, and as a side effect they now allow specifying a port for an app domain mapping.  While the documentation covers configuring a per-zone/HNSC app domain, it does go far enough to state that a single app domain can be shared so long as the authentication scheme and app pools match.

This gives us the same exact functionality, with SharePoint able to properly resolve and regenerate URLs in the proper public URL format regardless of the IIS binding.  Follow the link at the source for instructions on how to leverage the new commandlets if you’re running into challenges enabling apps with separate listener and public URLs.  You’ll want to pass the app domain and port of the public URL along to the commandlet, and make sure your reverse proxy/load balancer passes on the Host: header to match.

Commandlet: New-SPWebApplicationAppDomain
Source: http://technet.microsoft.com/en-us/library/dn144963.aspx
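Per the linked documentation, the call to map a per-zone app domain with an explicit port might look like this (a sketch with placeholder values, not a verified recipe):

```powershell
# Map an app domain to a web application zone on a specific port
# (hypothetical URLs/ports for illustration).
New-SPWebApplicationAppDomain -WebApplication "https://portal.contoso.com" `
    -AppDomain "myspapps.com" -Zone Default -Port 8443 -SecureSocketsLayer
```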

*Note: the new commandlets are not yet in the commandlet reference, since that hasn’t been published since July 2012.

Ghosts 2007->2010 Upgrade

Ghosts can cause a massive headache when upgrading from SharePoint 2007 to SharePoint 2010, especially if you decide that the upgrade is a good time to re-organize your feature folder hierarchy.

On a recent upgrade, we ran into multiple issues related to ghosted locations for both page layouts and content types.

Page Layouts

When upgrading the solutions to 2010, the code was migrated into clean Visual Studio projects.  What this meant was that all the feature folders were now “cleaned up” by VS: instead of Features/featurename we now had Features/projectname_featurename.  This blew up all of our masterpages and page layouts, which would throw file exceptions regardless of whether they were ghosted or not.  The only solution is either to rename the feature folders, getting the file references (which are read-only in the database) back to a happy state, or to rip out all the old masterpage gallery items and recreate them with the new vti_setuppaths.  Given the size of our site, setting all the pages to a dummy layout while we did this wasn’t feasible, so we had to rename all the folders to get our ghosts back.

So in the end, masterpage gallery = keep your ghosts in the same place, or risk breaking your content.

Rename feature folders in VStudio 2010:
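From memory (so treat this as a hedged pointer rather than gospel): the relevant knob is the feature’s Deployment Path property in the VS2010 Feature Designer, which defaults to

```
$SharePoint.Project.FileNameWithoutExtension$_$SharePoint.Feature.FileNameWithoutExtension$
```

Dropping the project-name token, leaving just $SharePoint.Feature.FileNameWithoutExtension$, should restore the 2007-style Features/featurename layout.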

Content Types

These were another big headache.  To make the content type definitions more manageable, each content type was broken out into its own xml module, and all the fields they reference were put into one module as well.  The same feature name was kept, but the xml definition files went from one file to about 15.

When the database was upgraded and new solutions were deployed we kept getting missing columns and content types.  The features wouldn’t even activate without a -Force flag.
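The forced activation was along these lines (a sketch; the feature name and site URL are placeholders):

```powershell
# Reactivate the upgraded feature, pushing past the missing-definition errors.
Enable-SPFeature -Identity "FeatureName" -Url "http://sitecollection" -Force
```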

We ran into the same exact situation as mentioned in the link below, where the definitions in the database were null and, upon upgrading, SharePoint didn’t know where to locate the actual xml that defined these fields and content types.


There have been other, better posts about content types, but the long and short of it is: modifying feature-deployed content types and fields at the xml definition = bad.  In 2007 the changes would not propagate to child types properly.  In 2010 the overwrite flag helps, but mileage varies depending on whether the content type is unghosted or ghosted.

Looking into the suggested fix at the bottom of that post (modify the site columns in 2007) led me to believe that these null/missing items come across in situations where the feature-defined items were ghosted.  If there is a definition on the filesystem, why eat up database storage by pushing it in there at the start?  The value is only added once the item becomes unghosted.

We’re testing this out now, but I’m comfortable saying that with content types, ghosts = bad; get rid of them.

So, if you’re upgrading and making structural changes to your feature folders and don’t want to modify any of the content: keep your masterpage gallery items in the same place (reghosted, to pick up changes) and unghost your content types so that they don’t blow up when their xml references are different.

Siteminder Agent for SharePoint 2010 – extra notes

This is more of a reminder for myself, but if you ever get a dreaded Tomcat 500 message from the agent and SSL errors in the SiteMinder logs, then the included openssl s_client command will be your friend.

In our case, the reverse proxy servlet was unable to retrieve the SharePoint pages due to certificate validation errors.  Everything on the SiteMinder server looked correct.  We assumed our SharePoint certificates were fine since we could reach the ClaimsWS, successfully providing the certificate for client authentication.

Finally, after comparing a working environment against the broken one using the openssl s_client tool, we found that the full certificate chain was not being sent to SiteMinder.  It turns out one of the intermediate certs was corrupted and showing as self-signed instead of pointing back to the root CA cert.

A quick re-export of the intermediate certificate from a working environment and a rebind and we were back in business (after many hours burned on it).

openssl s_client -connect host:port -showcerts
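To illustrate the tell-tale sign we were hunting for: a certificate whose issuer matches its own subject is self-signed.  This sketch generates a throwaway cert to show the comparison (paths and names are arbitrary):

```shell
# Generate a throwaway self-signed cert (stand-in for the corrupted intermediate).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
    -out /tmp/demo.crt -days 1 -subj "/CN=demo-intermediate" 2>/dev/null

# A properly chained intermediate would show a different issuer (the root CA);
# here subject and issuer match, i.e. self-signed.
openssl x509 -in /tmp/demo.crt -noout -subject -issuer
```

Run the same x509 inspection against each certificate dumped by s_client -showcerts to spot the broken link in the chain.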

Webservice Calls with Windows Claims

We ran into a unique issue recently where we had a need to separate out application pool accounts but still needed to share data across web applications.  The hurdle here is that both applications were protected with claims-based authentication, using both Windows Claims and a third-party claims provider.

The idea to get around this is to use webservice calls with an elevated account from one web application to pull data from the other.  I’m sure, as with most things SharePoint, there are a million and one ways to do this, but this is what we went with and we were under a time crunch.

Great, this should be simple: let’s just make an HttpWebRequest from one application to the other, passing the credentials of the elevated account.  Not so much.  Every time we ran this code, it hit a brick wall: if the site was not warmed up, it would just time out; if the site was warmed up, we would get an exception that the target closed the connection.

After some searching I came across these two articles.



The webservice call was a REST call, so we could test this in the browser, and in doing so I was able to recreate the timeout/closed-connection error.  I did notice that once logged in I was able to hit the URL fine.  I fired up Fiddler to see if I could figure out what was different, and found that the difference between the requests was the FedAuth cookie mentioned in the above articles.

So how to do this with a set of Windows creds and the Windows claims provider?  The articles only outline how to do this with an ADFS claims provider.  Back into Fiddler, I took a look at the request/response where I first received the auth cookie.  Why not add a request to this URL, passing in our creds, and see what we get?

Success!  You’ll find the code below used to test this out.  The missing piece: request _layouts/Authenticate.aspx directly with the Windows credentials, grab the FedAuth cookie from the response, and replay that cookie on the webservice call.

The code:

static void Main(string[] args)
{
    #region getAuth
    Console.WriteLine("Enter user domain");
    string domain = Console.ReadLine();
    Console.WriteLine("Enter username");
    string username = Console.ReadLine();
    Console.WriteLine("Enter password");
    string password = Console.ReadLine();

    NetworkCredential nc = new NetworkCredential(username, password, domain);
    CredentialCache ccCreds = new CredentialCache();
    ccCreds.Add(new Uri("https://hostname/"), "NTLM", nc);

    string FedAuth = "";
    try
    {
        // Hit the page that issues the auth cookie, authenticating with the
        // supplied Windows creds rather than the process identity.
        HttpWebRequest authReq = WebRequest.Create(
            "https://hostname/_layouts/Authenticate.aspx?Source=%252F&Source=/") as HttpWebRequest;
        authReq.Method = "GET";
        authReq.Accept = @"*/*";
        authReq.CookieContainer = new CookieContainer();
        authReq.AllowAutoRedirect = false;      // we want the Set-Cookie, not the redirect target
        authReq.UseDefaultCredentials = false;
        authReq.Credentials = ccCreds;

        HttpWebResponse webResponse = authReq.GetResponse() as HttpWebResponse;
        FedAuth = webResponse.Cookies["FedAuth"].Value;
    }
    catch (System.Net.WebException e)
    {
        // Some responses surface as a WebException but still carry the cookie.
        if (e.Response != null)
        {
            HttpWebResponse webResponse = e.Response as HttpWebResponse;
            if (webResponse.StatusCode == HttpStatusCode.InternalServerError
                && webResponse.Cookies != null)
            {
                FedAuth = webResponse.Cookies["FedAuth"].Value;
            }
        }
    }
    #endregion

    HttpWebRequest hwrTester =
        (HttpWebRequest)HttpWebRequest.Create("https://hostname/_vti_bin/listdata.svc");

    if (!String.IsNullOrEmpty(FedAuth))
    {
        Console.WriteLine("Auth found!");
        try
        {
            hwrTester.Method = "GET";
            hwrTester.Accept = @"*/*";
            hwrTester.Headers.Add("Accept-Encoding", "gzip, deflate");
            hwrTester.KeepAlive = true;

            // Replay the FedAuth cookie on the webservice request.
            CookieContainer cc = new CookieContainer();
            Cookie authcookie = new Cookie("FedAuth", FedAuth);
            authcookie.Expires = DateTime.Now.AddHours(1);
            authcookie.Path = "/";
            authcookie.Secure = true;   // Secure cookies are only sent over HTTPS
            authcookie.HttpOnly = true;
            authcookie.Domain = hwrTester.RequestUri.Host;
            cc.Add(authcookie);

            hwrTester.CookieContainer = cc;
            hwrTester.UseDefaultCredentials = false;
            hwrTester.Credentials = ccCreds;
            // Ask SharePoint to challenge with Windows auth instead of the forms login page.
            hwrTester.Headers.Add("X-FORMS_BASED_AUTH_ACCEPTED", "f");

            HttpWebResponse hwrespResp = (HttpWebResponse)hwrTester.GetResponse();
            StreamReader data = new StreamReader(hwrespResp.GetResponseStream(), true);
            string output = data.ReadToEnd();
            Console.WriteLine("Got response from list webservice!");
        }
        catch (System.Net.WebException e)
        {
            if (e.Response != null)
                Console.WriteLine(((HttpWebResponse)e.Response).StatusDescription);
        }
    }
}

SharePoint 2010 – Lost View Breadcrumb with Multiple Webparts on List View Pages

Have you ever tried to add a Content Editor web part to a 2010 list view (say, with instructions for the list), only to find that you lose the Views dropdown in the breadcrumb?

We had such views on many lists in our MOSS environment, only to run into this loss of functionality when we ran through the upgrade.  I’ve scoured the internet and picked apart the views to figure out a workaround.

The first thing I came across was this article/product by Pentalogic:


This got me started on the way to finding a good solution to the problem.  I’ll admit that I spent quite some time playing with reflecting the ListTitleViewSelectorMenu as described, to alter the Visible field/property.  I was largely unsuccessful, as I really didn’t want to put time into something that could break with an update or service pack.  I was about to give up when I read that the ListTitleViewSelectorMenu really uses the ListViewSelectorMenu for the actual rendering.

Instead of reflecting the sealed private property, why not cut to the chase and just add in a second, non-hidden ListViewSelectorMenu?  Sprinkle in some generic controls for the separator and styling, a touch of logic to ignore the case where there is only one web part, and success: we have our drop-down back.
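A rough sketch of that control (hedged: the class names, the CSS class, and the guard logic are from memory and should be verified against your own SharePoint 2010 assemblies before use):

```csharp
using System.Web.UI;
using System.Web.UI.WebControls;
using Microsoft.SharePoint;
using Microsoft.SharePoint.WebControls;

namespace Demo.Controls
{
    // Adds a second, non-hidden view selector next to the list title.
    public class ViewSelectorRestorer : WebControl
    {
        protected override void CreateChildControls()
        {
            base.CreateChildControls();

            // Only relevant on list view pages; real detection of the
            // multiple-web-part case is site-specific and omitted here.
            if (SPContext.Current == null || SPContext.Current.List == null)
                return;

            // Separator and styling to blend into the breadcrumb area
            // (CSS class name hypothetical).
            this.Controls.Add(new LiteralControl("<span class=\"s4-titlesep\">:</span>"));

            // The hidden ListTitleViewSelectorMenu delegates rendering to this
            // control anyway, so adding one directly restores the dropdown.
            this.Controls.Add(new ListViewSelectorMenu());
        }
    }
}
```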

I largely want to credit the Pentalogic folks for leading me down the right path, but I did want to share a method of getting around this without using reflection.  It’s not the cleanest, as you’ll have two of the same basic control on the page with one hidden, but it’s more future-proof.