
Ghosts 2007->2010 Upgrade

Ghosts can cause a massive headache when upgrading from SharePoint 2007 to SharePoint 2010, especially if you decide that the upgrade is a good time to re-organize your feature folder hierarchy.

On a recent upgrade, we ran into multiple issues related to ghosted locations for both page layouts and content types.

Page Layouts

When upgrading the solutions to 2010, the code was migrated into clean Visual Studio projects.  What this meant was that all the feature folders were “cleaned up” by VS: instead of Features/featurename we now had Features/projectname_featurename.  This blew up all of our master pages and page layouts; we would get file exceptions from them regardless of whether they were ghosted or not.  The only options are either to rename the feature folders so that the file references (which are read-only in the database) point to a valid path again, or to rip out all of the old master page gallery items and recreate them with the new vti_setuppaths.  Given the size of our site, setting all the pages to a dummy layout while we did this wasn’t feasible, so we had to rename all the folders to get our ghosts back.

So, in the end: master page gallery = keep your ghosts in the same place, or risk breaking your content.

Renaming feature folders in Visual Studio 2010:
http://blogit.create.pt/blogs/andrevala/archive/2010/08/21/Renaming-a-Feature-in-SharePoint-Tools-for-Visual-Studio-2010.aspx
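If you want to see where your files stand, the ghosting state of master page gallery items can be checked from the server object model.  The following is a minimal sketch (the site URL is a placeholder, and it has to run on a farm server): SPFile.CustomizedPageStatus reports Uncustomized for ghosted files and Customized for unghosted ones, and RevertContentStream() pushes a file back to its on-disk definition.

using System;
using Microsoft.SharePoint;

class CheckGhostStatus
{
    static void Main()
    {
        // Placeholder site collection URL; run on a farm server.
        using (SPSite site = new SPSite("http://hostname/"))
        using (SPWeb rootWeb = site.RootWeb)
        {
            // The master page gallery lives at /_catalogs/masterpage.
            SPFolder gallery = rootWeb.GetFolder("_catalogs/masterpage");
            foreach (SPFile file in gallery.Files)
            {
                // Uncustomized = ghosted (served from the feature folder on disk),
                // Customized = unghosted (contents stored in the content database).
                Console.WriteLine("{0}: {1}", file.Name, file.CustomizedPageStatus);

                // To re-ghost a file back to its on-disk definition:
                // file.RevertContentStream();
            }
        }
    }
}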

Content Types

These were another big headache.  To make the content type definitions more manageable, each content type was broken out into its own XML module, and all of the fields they reference were consolidated into a single module as well.  The feature name stayed the same, but the XML definition files went from one file to about 15.

When the database was upgraded and the new solutions were deployed, we kept getting missing columns and content types.  The features wouldn’t even activate without the -Force flag.
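For reference, force-activating a feature from the object model looks roughly like the sketch below (the PowerShell equivalent is Enable-SPFeature with the -Force switch).  The site URL and feature GUID are placeholders, and this assumes a site collection scoped feature.

using System;
using Microsoft.SharePoint;

class ForceActivateFeature
{
    static void Main()
    {
        // Placeholder site collection URL and feature GUID.
        using (SPSite site = new SPSite("http://hostname/"))
        {
            Guid featureId = new Guid("00000000-0000-0000-0000-000000000000");

            // The second argument forces activation, re-provisioning the feature's
            // elements even though SharePoint already considers it active.
            site.Features.Add(featureId, true);
        }
    }
}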

We ran into the exact same situation described in the link below, where the definitions in the database were null and, upon upgrading, SharePoint didn’t know where to locate the actual XML that defined these fields and content types.

http://social.msdn.microsoft.com/Forums/en/sharepoint2010setup/thread/12effe76-af9d-424a-ab05-6f87d794ded9

There have been other, better posts about content types, but the long and short of it is: modifying feature-deployed content types and fields at the XML definition = bad.  In 2007, the changes would not propagate to child types properly.  In 2010, the Overwrite flag helps, but mileage varies depending on whether the content type is ghosted or unghosted.

Looking into the suggested fix at the bottom of that post (modify the site columns in 2007) led me to believe that these null, missing items come across in situations where the feature-defined items were ghosted.  If there is a definition on the filesystem, why eat up database storage by pushing it in there from the start?  The value only gets written once the item becomes unghosted.

We’re testing this out now, but I’m comfortable saying that with content types, ghosts = bad; get rid of them.
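As a rough way to see which feature-provisioned items are affected, something like the sketch below can walk the root web and flag any field whose schema can no longer be resolved (the same loop works against SPWeb.ContentTypes).  This is a diagnostic sketch only; the site URL is a placeholder, and the exact exception you see may differ.

using System;
using Microsoft.SharePoint;

class FindBrokenDefinitions
{
    static void Main()
    {
        // Placeholder site collection URL; run on a farm server.
        using (SPSite site = new SPSite("http://hostname/"))
        using (SPWeb rootWeb = site.RootWeb)
        {
            foreach (SPField field in rootWeb.Fields)
            {
                try
                {
                    // Reading SchemaXml forces SharePoint to resolve the definition;
                    // ghosted fields whose XML files moved tend to fail here.
                    string schema = field.SchemaXml;
                }
                catch (Exception ex)
                {
                    Console.WriteLine("Field {0}: {1}", field.InternalName, ex.Message);
                }
            }
        }
    }
}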

So, if you’re upgrading and making structural changes to your feature folders and don’t want to modify any of the content: keep your master page gallery items ghosted in the same place so that they pick up your changes, and unghost your content types so that they don’t blow up when their XML references are different.

SiteMinder Agent for SharePoint 2010 – extra notes

This is more of a reminder for myself, but if you ever get a dreaded Tomcat 500 message from the agent and SSL errors in the SiteMinder logs, then the included openssl s_client command will be your friend.

In our case, the reverse proxy servlet was unable to retrieve the SharePoint pages due to certificate validation errors.  Everything on the SiteMinder server looked correct.  We assumed our SharePoint certificates were fine as we could reach the ClaimsWS, providing the certificate for client authentication successfully.

Finally, we compared a working environment against the broken one using the openssl s_client tool and found that the full certificate chain was not being sent to SiteMinder.  It turns out one of the intermediate certs was corrupted and showing as self-signed instead of pointing back to the root CA cert.

A quick re-export of the intermediate certificate from a working environment and a rebind and we were back in business (after many hours burned on it).

openssl s_client -connect host:port -showcerts
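The useful signal is in the Certificate chain section of the -showcerts output: each certificate shows a subject (s:) and issuer (i:) line, and in a healthy chain each issuer matches the next subject up to the root CA.  Our broken intermediate showed its own subject as the issuer.  An illustrative example (placeholder names, not our actual output):

Certificate chain
 0 s:/CN=sharepoint.example.com
   i:/CN=Example Intermediate CA
 1 s:/CN=Example Intermediate CA
   i:/CN=Example Intermediate CA   <-- should point back to the root CA, not itself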

SOPA

An aside with my thoughts on SOPA.

The Stop Online Piracy Act has been garnering a lot of attention in the tech community, and rightfully so.  If it passes, it could fracture the internet as we know it in the U.S.  Essentially, the act is meant to give rights holders the ability to call out copyright-infringing sites and to provide a process that lets authorities shut those sites down.

A little bit of history:

Many people have seen cease and desist letters from the MPAA and RIAA in the past threatening legal action for illegally shared content.  Many times, these letters went to people who did not do anything illegal other than provide service to the real culprits (parents get letters for teenagers who don’t know any better).  These threats did occasionally reach court, where the rights holders wanted to make an example of those who were “stealing content”.  The problem was that this punished people who were not the real criminals and did nothing to stop the root problem.

Eventually, ISPs did not want to see their customers sued for money they could otherwise be making off of them themselves, so they started simply cutting off access when rights holders contacted them, usually with a three-strikes-and-you’re-out deal.  Again, this didn’t address the root cause, and now we have the problem of users being cut off from a multitude of services (not least a form of speech) without a trial.

We’ve already been through two rounds of rights holders trying to protect their property without addressing the root cause, blindly punishing those in the line of fire.  That is the inherent issue many have with SOPA.  The bill is not detailed enough to prevent rights holders from crying wolf yet again against those who may not be the root problem.  In addition, it cuts off access at the very roots of the internet itself.

The provisions allow authorities to seize website addresses at the DNS level.  Think of DNS as the phonebook of the internet.  Everything on the internet, just like the phone system, is based on a system of numbers, in this case IP addresses.  DNS maps those numbers to the friendly www URLs you type into your browser.  So, say a rights holder reports a website to the authorities: the authorities will then (if they agree) reach into all the phonebooks in the US and strike that entry from the record.  You will not be allowed to reach that site and will instead be sent to a government-run seizure page.

There are more than a few flaws in this bill.  For one, how are we going to decide if a site should be shut down?  Will the site owner, who pays for internet access, electricity, and equipment, and works long hours to set up complicated systems, be given the right to a fair trial?  Not under the current provisions: an appointed enforcement coordinator will make the determination.  This enforcement coordinator will not be an elected official or a judicial committee; they will be government-assigned “experts” as deemed by the Secretary of State and the Secretary of Commerce.  Essentially, whoever is deemed the best candidate that applies for the job.  And how many qualified techies are going to choose a government position like this over opportunities in the private sector?

The most important flaw is that it won’t make much of a difference.  This act will not go after the actual machines running these sites, even if they were truly illegal.  Heck, it doesn’t even pull them off the internet: you will still be able to reach them if you know the IP address, or, even easier, the actual criminals can simply set up shop outside of the US, run their own DNS servers, and spread by word of mouth.  I believe I read in a Wired article that much of the hacker underground already operates its own independent sub-internet by this very method.  Even if the criminals don’t, many large companies that are against SOPA could choose to be champions of freedom by setting up shop outside of the US and doing the same legitimately.  Google is one of the companies opposing SOPA, and it already hosts a free DNS service.  This very well could fracture the internet.  What once was a global open market and forum, a place where you could easily share your thoughts, will start to become a bunch of walled gardens.  Walls with gaping holes that you can get through if you know your stuff.

There are many ways to get around anything that the big rights holding companies could hope to accomplish with this Act, and many things that this could break for legitimate companies, site owners, and end users.

I’m no expert on the bill (I’ll admit I’ve only skimmed it), and of course if it does pass, many people’s worries may never come to pass, but as I understand it, it opens up a lot of risk without much getting accomplished.  These are my thoughts and opinions, and I reserve the right to change my mind and/or be swayed otherwise.

SOPA can kill everyone’s SOAP Box

SiteMinder Agent for SharePoint 2010

A relatively new offering from CA is the SiteMinder Agent for SharePoint 2010.  I’ve had the “privilege” of working with this product, and while I’m impressed with its integration and what it does, be warned: you will need some patience and to be well versed in working on multiple web platforms.

I say this because the installation and configuration is a mashup of vanilla Apache, Tomcat, multiple different SSL tools, some proprietary CA configuration (not yet well documented), and all of the usual SharePoint tools (IIS/PowerShell/claims-based authentication).

From my own experience with SiteMinder, it is very much a Unix-targeted product.  As such, it is not surprising that it relies on Unix’s web server heavy hitters, Apache and Tomcat.  Tomcat can run as an independent web server, or it can have traffic routed to it from another web server such as Apache.  In the case of the SiteMinder Agent, it does double duty, as the agent uses both modes.

For this reason, if you are a SharePoint administrator seeking to implement the SiteMinder agent, it’s time to get very familiar with these technologies as well.  Important things to pay attention to if you are a straight IIS admin:

1) Configuration files are case sensitive.  If in doubt, copy and paste your paths.
2) Paths may either require forward slashes where backslashes are usually used in Windows, or they may need to be escaped backslashes.  This depends on which configuration file you’re editing, so pay attention.
3) Get comfortable with a command prompt and Notepad (I highly suggest choosing PowerShell over the vanilla command prompt for autocomplete goodness).

We decided to implement SSL, which doubled our complexity.  Additional skills needed here:

1) Familiarity with the openssl command line tools.  These will handle your certificates for the SiteMinder Apache httpd server.
2) Familiarity with Java’s keytool.  This will handle your certificates for the Tomcat server.
3) Windows certificates, and SharePoint’s Trust store.
4) A good understanding of SSL/TLS, the handshake and client authentication for troubleshooting.

Quick note about #3: any SSL service that SharePoint is going to connect to must have the destination’s SSL certificate (or its CA) added to the SharePoint trust store.  SharePoint does not use the Windows certificate store to trust remote servers.  But you’ll still need to be comfortable working with the Windows certificate store in order to install SSL certificates and grant your IIS apps access to them; that is how your servers identify themselves to remote machines.  Why they moved the trust store into SharePoint while still requiring knowledge of the Windows certificate store for its own identification is beyond me.
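For reference, certificates can be added to SharePoint’s trust store through Central Administration (Security > Manage trust), with PowerShell (New-SPTrustedRootAuthority), or from the object model.  A minimal sketch of the object model route, assuming the SPTrustedRootAuthority(name, certificate) constructor and using placeholder paths and names:

using System.Security.Cryptography.X509Certificates;
using Microsoft.SharePoint.Administration;

class AddTrustedRoot
{
    static void Main()
    {
        // Placeholder path to the exported certificate (or its CA) for the
        // service SharePoint will be calling over SSL.
        X509Certificate2 cert = new X509Certificate2(@"C:\certs\siteminder-ca.cer");

        // Register it in SharePoint's trust store; run on a farm server under
        // an account with farm administration rights.
        SPTrustedRootAuthority trust = new SPTrustedRootAuthority("SiteMinder CA", cert);
        trust.Update();
    }
}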

Quick note about #4: out of the box, one of the services that comes with the Agent for SharePoint requires client SSL authentication.  That is, any server (WFE) attempting to connect to the agent must submit its own SSL certificate, and the agent must trust it and complete the handshake.  You can turn this off on the agent side, but it is an added level of security that prevents unauthorized access to your directory of users.

At the end of the day, the CA SiteMinder Agent for SharePoint 2010 is not a small undertaking, so be sure you are familiar with the tools that will need to be used.

Webservice Calls with Windows Claims

We ran into a unique issue recently where we needed to separate application pool accounts but still share data across web applications.  The hurdle was that both applications were protected with claims-based authentication, using both Windows claims and a third-party claims provider.

The idea to get around this is to use webservice calls with an elevated account from one web application to pull data from the other.  I’m sure, as with most things SharePoint, there are a million and one ways to do this, but this is what we went with since we were under a time crunch.

Great, this should be simple: let’s just make an HttpWebRequest from one application to the other, passing the credentials of the elevated account.  Not so much.  Every time we ran this code, it hit a brick wall.  If the site was not warmed up, it would just time out; if the site was warmed up, we would get an exception that the target had closed the connection.

After some searching, I came across these two articles:

http://msdn.microsoft.com/en-us/library/gg597521.aspx#SPS_LearningClaims_3_Tip2

http://blogs.technet.com/b/speschka/archive/2010/06/04/using-the-client-object-model-with-a-claims-based-auth-site-in-sharepoint-2010.aspx

The webservice call was a REST call, so we could test it in the browser, and in doing so I was able to recreate the timeout/closed-connection error.  I did notice that once logged in, I was able to hit the URL fine.  I fired up Fiddler to see if I could figure out what was different and found that the difference between the requests was the FedAuth cookie mentioned in the above articles.

So how do we do this with a set of Windows creds and the Windows claims provider?  The articles only outline how to do it with an ADFS claims provider.  Back in Fiddler, I took a look at the request/response where I first received the auth cookie.  Why not add a request to that URL, passing in our creds, and see what we get?

Success!  You’ll find the code below used to test this out.  The missing piece:

http://hostname/_windows/default.aspx?ReturnUrl=/_layouts/Authenticate.aspx?Source=%252F&Source=/

The code:

using System;
using System.IO;
using System.Net;

namespace FedAuthTest
{
    class Program
    {
        static void Main(string[] args)
        {
            #region getAuth
            Console.WriteLine("Enter user domain");
            string domain = Console.ReadLine();
            Console.WriteLine("Enter username");
            string username = Console.ReadLine();
            Console.WriteLine("Enter password");
            string password = Console.ReadLine();

            // NTLM credentials for the elevated account, scoped to the target site.
            NetworkCredential nc = new NetworkCredential(username, password, domain);
            CredentialCache ccCreds = new CredentialCache();
            ccCreds.Add(new Uri("http://hostname/"), "NTLM", nc);
            // Will hold the FedAuth token issued by the claims sign-in page.
            string FedAuth = "";
            try
            {
                Console.WriteLine("Authenticating");
                // Request the Windows sign-in page with the elevated account's NTLM
                // credentials.  AllowAutoRedirect is disabled so the FedAuth cookie
                // can be read straight off of the 302 response.
                HttpWebRequest authReq =
                     HttpWebRequest.Create("http://hostname/_windows/default.aspx?ReturnUrl=/" +
                     "_layouts/Authenticate.aspx?Source=%252F&Source=/") as HttpWebRequest;
                authReq.Method = "GET";
                authReq.Accept = @"*/*";
                authReq.CookieContainer = new CookieContainer();
                authReq.AllowAutoRedirect = false;
                // Use the explicit elevated credentials rather than the identity
                // running this test.
                //authReq.UseDefaultCredentials = true;
                authReq.UseDefaultCredentials = false;
                authReq.Credentials = ccCreds;
                HttpWebResponse webResponse = authReq.GetResponse() as HttpWebResponse;
                FedAuth = webResponse.Cookies["FedAuth"].Value;
                webResponse.Close();
            }
            catch (System.Net.WebException e)
            {
                if (e.Response != null)
                {
                    HttpWebResponse webResponse = e.Response as HttpWebResponse;
                    if (webResponse.StatusCode == HttpStatusCode.InternalServerError)
                    {
                        if ((e.Response as HttpWebResponse).Cookies != null)
                        {
                            FedAuth = webResponse.Cookies["FedAuth"].Value;
                        }
                    }
                    webResponse.Close();
                }
            }
            #endregion

            // Call the REST listdata service on the target web application,
            // presenting the FedAuth cookie captured above.
            HttpWebRequest hwrTester = (HttpWebRequest)HttpWebRequest.Create("http://hostname/_vti_bin/listdata.svc");

            if (!String.IsNullOrEmpty(FedAuth))
            {
                Console.WriteLine("Auth found!");
                try
                {
                    hwrTester.Method = "GET";
                    hwrTester.Accept = @"*/*";
                    hwrTester.Headers.Add("Accept-Encoding", "gzip, deflate");
                    hwrTester.KeepAlive = true;
                    CookieContainer cc = new CookieContainer();
                    // Attach the FedAuth cookie so the claims-aware site treats this
                    // request as already authenticated.
                    Cookie authcookie = new Cookie("FedAuth", FedAuth);
                    authcookie.Expires = DateTime.Now.AddHours(1);
                    authcookie.Path = "/";
                    // Only mark the cookie Secure when the target is HTTPS; a Secure
                    // cookie will not be sent over plain HTTP.
                    authcookie.Secure = hwrTester.RequestUri.Scheme == Uri.UriSchemeHttps;
                    authcookie.HttpOnly = true;
                    authcookie.Domain = hwrTester.RequestUri.Host;
                    cc.Add(authcookie);

                    hwrTester.CookieContainer = cc;
                    hwrTester.UseDefaultCredentials = true;
                    //hwrTester.UseDefaultCredentials = false;
                    hwrTester.Credentials = ccCreds;
                    // Tell SharePoint the client will not accept forms-based auth.
                    hwrTester.Headers.Add("X-FORMS_BASED_AUTH_ACCEPTED", "f");

                    HttpWebResponse hwrespResp = (HttpWebResponse)hwrTester.GetResponse();
                    StreamReader data = new StreamReader(hwrespResp.GetResponseStream(), true);
                    string output = data.ReadToEnd();
                    data.Close();
                    hwrespResp.Close();
                    Console.WriteLine("Got response from list webservice!");
                    Console.Write(output);
                }
                catch (System.Net.WebException e)
                {
                    if (e.Response != null)
                    {
                        e.Response.Close();
                    }
                }
            }
        }
    }
}