OK, so this isn't really a technical post, but being from Omaha, it just hit me that I should catalog some of the famous among us. (Listed by birth date)
Past:
Fred Astaire - b. 5/10/1899 d. 6/22/1987 -- Actor, Dancer, Choreographer, Singer, Musician
Henry Fonda - b. 5/19/1905 d. 8/12/1982 -- Actor
Gerald R. Ford - b. 7/14/1913 d. 12/26/2006 -- President of the United States
Marlon Brando - b. 4/3/1925 d. 7/1/2004 -- Actor
Malcolm X - b. 5/19/1925 d. 2/21/1965 -- Muslim Minister & Activist
Living:
Warren Buffett - b. 8/30/1930 -- 3rd richest man in the world (Forbes Profile)
Nick Nolte - b. 2/8/1941 -- Actor
Joe Ricketts - b. 7/16/1941 -- Owner of the Chicago Cubs, former CEO of TD Ameritrade
Gale Sayers - b. 5/30/1943 -- NFL Running back (raised in Omaha)
John Beasley - b. 6/26/1943 -- Actor
Johnny Rodgers - b. 7/5/1951 -- Heisman Trophy Winner
Swoosie Kurtz - b. 9/6/1955 -- Actor
Paula Zahn - b. 2/24/1956 -- Journalist & Newscaster
Wade Boggs - b. 6/15/1958 -- Professional Baseball Third Baseman
James M. Connor - b. 6/16/1960 -- Actor
Alexander Payne - b. 2/10/1961 -- Director, Screenwriter & Producer
Nicholas Sparks - b. 12/31/1965 -- Novelist, Screenwriter, & Producer
Calvin Jones - b. 11/27/1970 -- Raiders, Packers & Nebraska Cornhusker Running Back
Houston Alexander - b. 3/22/1972 -- Mixed Martial Artist
Gabrielle Union - b. 10/29/1972 -- Actor
Ahman Green - b. 2/16/1977 -- Seahawks, Packers & Nebraska Cornhusker Running Back
Bryan Greenberg - b. 5/24/1978 -- Actor
Eric Crouch - b. 9/16/1978 -- Heisman Trophy Winner & Sports Analyst
Chris Klein - b. 3/14/1979 -- Actor (Graduated from Millard West High School)
Andy Roddick - b. 8/30/1982 -- Former World #1 Professional Tennis Player
311 - b. 1988 -- OK, technically a band and not a person
Thursday, October 23, 2014
Friday, August 29, 2014
SQL Server Access Control & Synonyms
OK. So, say you have a SQL Server database and you want to provide varying levels of access to different user groups. The database was created for one application, and now you're being asked to provide access to a report development team. How do you go about doing that in some rational way -- say, the way an admin assigns access to files on a file server?
Well, that would be great, except there aren't any folders in SQL Server. BUT, since SQL Server 2005 we've had real schema objects, and synonyms to boot.
So how does that help us? Let's take a look. Say you built your database and, like any rational developer, you built everything in dbo. Here's a list of your tables:
dbo.Customers
dbo.PurchaseOrders
dbo.Invoices
dbo.UserLogins
dbo.ApplicationSettings
Obviously you don't want the report developers having access to the UserLogins and ApplicationSettings. One, they just don't need it, and two, there's sensitive stuff in there.
Our approach:
- Use an Active Directory group (MY_DOMAIN\Report Writers) to control access.
- Create a schema for the report writers to access.
- Assign access to the new schema and not the old one.
Step 1) Create the schema:
CREATE SCHEMA reports
Step 2) Add some table synonyms to the schema:
CREATE SYNONYM reports.PurchaseOrders FOR dbo.PurchaseOrders
CREATE SYNONYM reports.Invoices FOR dbo.Invoices
CREATE SYNONYM reports.Customers FOR dbo.Customers
Step 3) Map the AD group to your database
CREATE LOGIN [MY_DOMAIN\Report Writers] FROM WINDOWS WITH DEFAULT_DATABASE=[master]
GO
CREATE USER [MY_DOMAIN\Report Writers] FOR LOGIN [MY_DOMAIN\Report Writers]
(This is the same as adding a Login at the server level, and mapping to the public role on a database catalog).
Step 4) Give the report writers access to the reports schema.
GRANT SELECT ON SCHEMA :: reports TO [MY_DOMAIN\Report Writers]
What have we accomplished?
- Your report writer team can log into your database
- Your report writer team can view all of the table synonyms in Management Studio
- Your report writer team doesn't have any write permissions (INSERT/DELETE/UPDATE) on anything.
- Your report writer team cannot query the objects in DBO directly, so they don't have access to sensitive tables like UserLogins and ApplicationSettings.
Turns out that in SQL Server, synonyms are like file system hard links. If you had a file in one directory and took away permissions on that directory, then created a hard link in another directory and granted permissions there, the user would still have access. The same idea works here: since the report writers don't have access to the dbo schema, they can't view the tables there. But since they have access to reports, they can see the synonyms and query them as well.
Turns out that you can customize access to the synonyms once they are created. All of the GRANT/DENY/REVOKE commands work the same. You'll even be able to apply column-level security!
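For instance, customizing access per synonym might look like the following sketch. TaxID is a hypothetical sensitive column used only for illustration; it isn't one of the columns from the tables above.

```sql
-- Hypothetical examples; TaxID is an assumed column name for illustration.
-- Hide one sensitive column while the rest of the synonym stays readable:
DENY SELECT ON OBJECT::reports.Customers (TaxID) TO [MY_DOMAIN\Report Writers];

-- Or block a whole synonym; DENY overrides the schema-level GRANT:
DENY SELECT ON OBJECT::reports.PurchaseOrders TO [MY_DOMAIN\Report Writers];
```

Note that DENY, not REVOKE, is the right verb here: the report writers' access came from the schema-scoped GRANT, so an object-level REVOKE would have nothing to remove.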
Tuesday, August 12, 2014
Anonymous Performance Point Dashboards (SP2013)
Performance Point (PPS) became part of the Enterprise offering of SharePoint starting with Microsoft Office SharePoint Server 2007. As a tool, it was branded as "Bringing BI to the Masses." In SharePoint 2010, it was possible to deploy PPS dashboards to BI sites with anonymous access. SharePoint 15 (2013) broke this, either on purpose or by mistake, and here's how it happened:
Assembly: Microsoft.PerformancePoint.ScoreCard.WebControls.dll
Version: 14.0.0.0
Class: Microsoft.PerformancePoint.ScoreCard.OlapViewCache
Base Class: System.Web.UI.Page
Assembly: Microsoft.PerformancePoint.ScoreCard.WebControls.dll
Version: 15.0.0.0
Class: Microsoft.PerformancePoint.ScoreCard.OlapViewCache
Base Class: Microsoft.SharePoint.WebControls.LayoutsPageBase
Differences between version 14 & 15: other than the base class, none.
Result of the change: _layouts/PPSWebParts/OlapViewCache.aspx requires user authentication with SharePoint 2013 (v15), whereas SharePoint 2010 (v14) did not. This means that while the ASPX application page generated by SharePoint Designer can be placed in an anonymous-access document library, elements referenced on the page via image (<img src=""/>) tags require authentication. Failure to provide credentials causes the chart elements not to render, which is a critical failure of the dashboard on anonymous-access sites.
Here's the workaround we implemented.
- Create an ASPX page which duplicates the operations of Microsoft.PerformancePoint.ScoreCard.OlapViewCache.
- Copy the ASPX page from (1) to:
- 15\TEMPLATE\LAYOUTS\PPSWebParts
- 14\TEMPLATE\LAYOUTS\PPSWebParts
Note: an IISRESET may be required after placing the files in the 14 & 15 hives.
The following content implements the replacement OlapViewCache.aspx which derives from Page instead of LayoutsPageBase.
<%@ Page Language="C#" %>
<%@ Assembly Name="Microsoft.PerformancePoint.ScoreCards.ServerCommon, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
<%@ Import Namespace="Microsoft.PerformancePoint.Scorecards" %>
<%@ Import Namespace="Microsoft.SharePoint.WebControls" %>
<%@ Import Namespace="System" %>
<%@ Import Namespace="System.Globalization" %>
<%@ Import Namespace="System.Web" %>
<%--
Name: OlapViewCache.aspx
Deployment Location: 15\TEMPLATE\LAYOUTS\PPSWebParts
Description: Replaces the SharePoint 2013 OlapViewCache.aspx utility page. The script code in
this file was produced to replicate Microsoft.PerformancePoint.Scorecards.WebControls, which
changed inheritance to LayoutsPageBase in SharePoint v15 (2013). In v14, System.Web.UI.Page
was the base class. The change in v15 caused the page to require authentication while other
dashboard components could be used anonymously. This ASPX class derives from Page once more.
--%>
<script runat="server" type="text/C#">
private void Page_Load(object sender, EventArgs e)
{
    string externalkey = Request.QueryString["cacheID"];
    string s1 = Request.QueryString["height"];
    string s2 = Request.QueryString["width"];
    string tempFcoLocation = Request.QueryString["tempfco"];
    string str1 = Request.QueryString["cs"];
    string str2 = Request.QueryString["cc"];
    int height;
    int width;
    try
    {
        height = int.Parse(s1, (IFormatProvider)CultureInfo.InvariantCulture);
        width = int.Parse(s2, (IFormatProvider)CultureInfo.InvariantCulture);
    }
    catch
    {
        height = 480;
        width = 640;
    }
    int colStart = 1;
    int colCount = 100;
    try
    {
        if (str1.Length > 0)
            colStart = Convert.ToInt32(str1, (IFormatProvider)CultureInfo.CurrentCulture);
        if (str2.Length > 0)
            colCount = Convert.ToInt32(str2, (IFormatProvider)CultureInfo.CurrentCulture);
    }
    catch
    {
        colStart = 1;
        colCount = 100;
    }
    string mimeType;
    string viewHtml;
    byte[] bytesImageData;
    if (!BIMonitoringServiceApplicationProxy.Default.GetReportViewImageData(tempFcoLocation,
        externalkey, height, width, colStart, colCount,
        out mimeType, out viewHtml, out bytesImageData))
        return;
    if (mimeType.IndexOf("TEXT", StringComparison.OrdinalIgnoreCase) >= 0)
    {
        Response.ContentType = mimeType;
        Response.Write(viewHtml);
    }
    else
    {
        if (bytesImageData.Length <= 0)
            return;
        Response.Clear();
        Response.ContentType = mimeType;
        Response.BinaryWrite(bytesImageData);
        HttpContext.Current.ApplicationInstance.CompleteRequest();
    }
}
</script>
Friday, June 13, 2014
Friday Fun: l337 Speek Translator
So, maybe you're not L337 'nough to make your own l337 5P33|< translations. Here's some help along the way. The following HTML script will give you a really simple page to translate plain text.
-Matt
<html>
<head>
<title>L337 #4X0r 5p3e|<</title>
<script language="javascript">
var x1at = [
    ['A', 'a', '4', '@', '/-\\'],
    ['B', 'b', '5', '&', '8'],
    ['C', 'c', '¢', '(', '{', '['],
    ['D', 'd', '|)', '|}', '|]', '|>'],
    ['E', 'e', '3', '3', '3', '3'],
    ['F', 'f'],
    ['G', '6', 'g'],
    ['H', '#', 'h'],
    ['I', 'i', '1', '|', 'l'],
    ['J', 'j', '|'],
    ['K', 'k', '|<'],
    ['L', 'l', '1', '!', '|_'],
    ['M', 'm', '|\\/|', '^^'],
    ['N', '^', 'n', '|\\|'],
    ['O', 'o', '0', '()', '[]', '{}', '<>'],
    ['P', 'p', 'p', 'P'],
    ['Q', 'q', '9', 'q', '9'],
    ['R', 'r', 'rrr', 'R'],
    ['S', 's', '5', '$'],
    ['T', 't', '7', '+'],
    ['U', 'u', '|_|'],
    ['V', 'v', '\\/', '`\''],
    ['W', 'vv', 'w', '\\/\\/'],
    ['X', '%', 'x', '*'],
    ['Y', 'y', '`\'|'],
    ['Z', 'z', '-/_', '%']
];

function x147e() {
    var l337 = document.getElementById("L337");
    var _5Rc = document.getElementById("_5Rc");
    var c0un7 = document.getElementById("c0un7");
    l337.innerHTML = "";
    for (var x = 0; x < _5Rc.value.length; x++) {
        var c = _5Rc.value.charAt(x).toUpperCase();
        if (c >= 'A' && c <= 'Z') {
            var i = c.charCodeAt(0) - 'A'.charCodeAt(0);
            // Pick a random substitution from this letter's row.
            // (Math.random() is always < 1, so the index never reaches the row length.)
            l337.innerHTML += x1at[i][Math.floor(Math.random() * x1at[i].length)];
        } else {
            l337.innerHTML += c;
        }
    }
    c0un7.innerText = "Char Count:" + l337.innerText.length;
}
</script>
</head>
<body>
<div id="L337" style="white-space: pre; font-family:Trebuchet MS; font-size:14px; width:400px; padding:10px; border:1px solid black; margin-bottom:5px"></div>
<div id="c0un7" style="font-size:14px; font-family:Trebuchet MS; margin-bottom:5px; padding:10px">Char Count:</div>
<textarea id="_5Rc" cols="50" rows="5" style="margin-bottom:20px"></textarea><br />
<button onclick="x147e(); return false;">x147e</button>
</body>
</html>
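If you just want the substitution logic without the page around it, here's a minimal standalone sketch of the same idea. The table is abbreviated to a few letters for brevity; the full page above defines a row for every letter A-Z.

```javascript
// Standalone sketch of the page's substitution routine, with the DOM removed.
// The table is abbreviated; the full version has one row per letter A-Z.
const x1at = {
  A: ['A', 'a', '4', '@', '/-\\'],
  E: ['E', 'e', '3'],
  F: ['F', 'f'],
  T: ['T', 't', '7', '+'],
};

function toL337(text) {
  let out = '';
  for (const ch of text) {
    const row = x1at[ch.toUpperCase()];
    // Letters with a row get a random substitution; everything else passes through.
    out += row ? row[Math.floor(Math.random() * row.length)] : ch;
  }
  return out;
}
```

Non-letter characters (digits, punctuation, whitespace) come through untouched, which keeps the output readable-ish.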
Monday, June 9, 2014
Information Architecture Re-factorization
Situation: You have a legacy SharePoint 2010 Enterprise site that has grown organically. The web application has a home-spun/3rd-party global navigation component for the top nav and one site collection for every 1.03 web sites. Plus, there's no discernible taxonomy to the current site structure. Your mission, should you choose to accept it: migrate to SharePoint 2013, add support for Managed Metadata from the Term Store, rework the top navigation as a "MegaMenu," and reduce the overall number of site collections. Go!
Let's start with the IA re-factorization problem and go from there:
Option 1) It's Enterprise, use "Manage Content and Structure"
Option 2) Use some exotic PowerShell magic to detect and recreate all settings.
Option 3) Use some not-so-exotic PowerShell magic, leveraging Import-SPWeb and Export-SPWeb commands.
Option 4) Buy a third-party tool, like those from Metalogix or AvePoint or some other great source.
I'm cheap, so we're going to go with something from options 1-3. That said, we'll build the 2013 farm, copy the 2010 content database (fortunately there's only one), and attach it to a web application we've built for the new deployment. We've now got a SharePoint site version-migrated from 2010 to 2013, operating in 2010 visual mode.
Option 1: Manage Content and Structure
MCaS is a pretty exciting option. It's GUI-based, so the learning curve is pretty short. It will let you move Items, Lists, or Web Sites between Web Sites or Lists.
Drawback #1, you've got to move one Web Site or List at a time. If you've got a bunch (in my case more than 80), you're going to be at it a while.
Advantage: you get immediate feedback on the move, though I've seen that feedback be wonky. For instance, if the move takes too long, it may seem like it's errored out when it's actually still plugging away in the background.
Drawback #2, you're going to have to keep both the source and destination content in the same web application. MCaS doesn't work with two different web applications, but is this really a drawback?
Drawback #3, you can't move between site collections, so you're stuck inside the one you started in.
Option 2: Exotic PowerShell Magic
This option seems the sexiest: create a totally awesome PowerShell script that recreates one SPWeb in another SPWeb. At the same time, it seems like the most error-prone, especially when you throw in SharePoint Publishing. The code you write for this needs to duplicate every setting while accounting for the new URL taxonomy as the content shifts from one location to another. Plus, you'll need to duplicate not only the attributes of SPWeb and SPList, but also use the secondary object overlays for SPFile, SPFolder, and publishing classes like PublishingSite, PublishingWeb, and PublishingPage. Too exotic for me...
Drawback #1: Man it's hard to write a script like this. Better buy a tool, or look for some open source scripts...
Option 3: Not-So-Exotic PowerShell Magic
OK, so this one seems viable to me. Here we can script together a bunch of Export-SPWeb commands with Import-SPWeb commands to effectively extract and load the content in different locations. The best part here is that the content can be in different web applications, or even in different farms.
Drawback #1, Export-SPWeb will give you some really weird errors, especially if List Templates and their ilk have been removed. The same goes for Import-SPWeb. I've run Import-SPWeb multiple times, and that seems to resolve the problem when there's missing list-template content.
Advantage, if you can call it that: you couldn't look at the list contents anyway if the templates were missing. They're basically dead items that you have to prune with PowerShell, because the GUI won't let you navigate to the list settings pages.
Drawback #2, if you script several moves together, you're probably going to have to go digging through a bunch of log files to see what moved and what didn't. Error reporting isn't a strong suit of these commands.
Personally, I'll recommend Option 1 and Option 3 to my clients, and point them to some third-party solutions. But for the situations I've been a part of, 1 & 3 will be just fine.
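For reference, Option 3 boils down to pairs of commands like the following. This is a sketch: the URLs and paths are placeholders, not values from an actual migration.

```powershell
# Hypothetical sketch of Option 3: extract one web from the legacy taxonomy
# and load it at its new location. Identities and paths are placeholders.
Export-SPWeb -Identity "http://legacy/sites/somesite/hr" `
    -Path "D:\Migration\hr.cmp" `
    -IncludeVersions All -IncludeUserSecurity

Import-SPWeb -Identity "http://portal/departments/hr" `
    -Path "D:\Migration\hr.cmp" `
    -IncludeUserSecurity -UpdateVersions Overwrite
```

Wrap a list of source/destination pairs in a loop and you have the "script several moves together" case, log files and all.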
Friday, May 9, 2014
Extending with Science! ... or files
Hey, say you've got like 40,000 files on your hands. AND you've got a public, Enterprise SharePoint 2010 portal. AND you've got a mandate to publish each one of those files to the world. AND you just got kicked off your previous web host. Watcha gonna do?
Well my friend, use SCIENCE!
OK, not really. BUT, SharePoint is a Microsoft .NET application hosted in Internet Information Services, so we can just add a virtual directory to the SharePoint web application and serve away. Yes, but...
I tried doing just the above once, but didn't discover the secret sauce until just yesterday. Here's the breakdown:
- The 40,000 files are served anonymously (this is actually a good thing)
- Directory access performance isn't really an issue, there aren't a million hits a month let alone a minute.
- There's already a structured navigational approach to finding the files so there's no need to browse them or build a new catalog.
- Establish a virtual directory in your SharePoint web app. If you've got multiple AAMs and/or multiple WFEs, you'll have to repeat the following process for each.
- Once you've built the VDIR, double-click its Authentication option. We only needed anonymous, so I turned off everything else (more later).
The change? This time around, instead of using pass-through security on the VDIR, I used the "Connect As" option and connected using an account that has access to the files. That solved the anonymous-access problem I was having. Another solution would be to grant the "Everyone" user access to the files, but that's a problem in and of itself; plus, if you're using a remote VDIR, it probably won't work unless you make the file share wide open.
The other change? You need to make sure you establish a web.config file in the directory you're serving. Just inheriting from the base file in SharePoint's home dir won't work. Until I established a custom web.config, SharePoint tried to intercede and bump the URL up against the content database. Not-found is the same as access denied in that world. So, in summary:
- Use a delegation account to access the VDIR
- Establish a web.config that explicitly sets the Anonymous access properties
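A minimal web.config for the VDIR might look like the following. This is a sketch of the idea, not the exact file we deployed, and any handler setup your content needs may vary:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Hypothetical minimal web.config for the VDIR. -->
<configuration>
  <system.web>
    <!-- Authorize all users instead of inheriting SharePoint's rules. -->
    <authorization>
      <allow users="*" />
    </authorization>
  </system.web>
  <system.webServer>
    <security>
      <authentication>
        <!-- Only works if the anonymousAuthentication section is unlocked
             (overrideMode) at the server level in IIS. -->
        <anonymousAuthentication enabled="true" />
      </authentication>
    </security>
  </system.webServer>
</configuration>
```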
Wednesday, May 7, 2014
SharePoint Powershell -- Awesomesauce
Introduced with SharePoint 2010 (but actually possible with older versions), we were given direct access to the SharePoint object model in an interpreted scripting environment.
With a simple script like this, I can list all of the lists and libraries in a SharePoint web application:
$wa = get-spwebapplication http://site.domain.com
foreach($site in $wa.Sites) {
foreach($web in $site.AllWebs) {
foreach($list in $web.Lists) {
write-host $wa.Name $web.Title $web.Url $list.Title
}
}
}
Great, so you can list a bunch of the content. Tell me something I can't do with other tools...
OK, here's one. I had a WSP that installed custom page layouts. One of the page layouts got modified by a user using SharePoint designer, and even if I returned the page layout to the original state, upgrading or re-installing the WSP wouldn't overwrite the page layout in the content database.
Solution: Detach each of the publishing pages from the page layout, and write a file that lists each of the publishing pages that were attached to it. Then remove all of the bad stuff using whatever tool you like, and reattach the page layouts based on the data we saved in the file.
Check out this script, the first half of the process of detaching the page layouts. $detLog gets set outside the function and is the full path to the change log. Just point the function at an SPWeb and a reference to the new page layout (it needs to be a Microsoft.SharePoint.Publishing.PageLayout) and away it goes, cycling through the entire SPWeb and its children. Oh, $comment gets set outside the function too. It could be something snappy like "Detaching Page Layout for Upgrade."
Function BFS-PubPage($web, $newlayout) {
    $web.Lists | foreach-object -process {
        $l = $_
        $l.Items | foreach-object -process {
            $i = $_
            if ([Microsoft.SharePoint.Publishing.PublishingPage]::IsPublishingPage($i)) {
                $pp = [Microsoft.SharePoint.Publishing.PublishingPage]::GetPublishingPage($i)
                if ($pp.Layout -ne $null) {
                    if ($pp.Layout.ServerRelativeUrl -eq "/_catalogs/masterpage/OffendingLayout1.aspx" `
                        -or $pp.Layout.ServerRelativeUrl -eq "/_catalogs/masterpage/OffendingLayout2.aspx") {
                        write-host -ForegroundColor DarkBlue ($web.Url + "/" + $pp.Url)
                        $cl = $pp.Layout
                        $pp.CheckOut()
                        $pp.Layout = $newlayout
                        $pp.Update()
                        $pp.CheckIn($comment)
                        $pf = $pp.ListItem.File
                        $pf.Publish($comment)
                        $pf.Approve($comment)
                        add-content $detLog ($web.Url + "`t" + $i.Name + "`t" + $pp.Url + "`t" + $cl.ServerRelativeUrl)
                    } else {
                        write-host -ForegroundColor DarkYellow `
                            ($web.Url + "/" + $pp.Url + "`t" + $pp.Layout.ServerRelativeUrl)
                    }
                }
            }
        }
    }
    $web.Webs | foreach-object -process {
        if ($_ -ne $null) {
            BFS-PubPage $_ $newlayout
        }
    }
}
You probably noticed that this script implements the for-each loops differently. Instead of using the C#-style foreach keyword, it uses the object pipeline. I'm guessing one method is probably more efficient than the other, but you never know.
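Side by side, the two looping styles look like this (a trivial sketch, assuming $web is an SPWeb you already have in hand):

```powershell
$lists = $web.Lists   # assumes $web is an SPWeb already obtained elsewhere

# C#-style keyword loop: the collection is enumerated directly in the language.
foreach ($l in $lists) { write-host $l.Title }

# Pipeline style (what the script above uses): each object streams through ForEach-Object.
$lists | foreach-object -process { write-host $_.Title }
```

For a one-off maintenance script the difference is mostly taste; the pipeline style composes more naturally with other cmdlets.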