So, last minute, a vendor offered me a pass to the Gartner event down in San Diego. It looks like fun, and I'm always up for networking and hanging out with fellow geeks.
What's interesting is how most of the topics are fairly generic in the web world, and yet one product is mentioned specifically by name in some of the panels and workshops. SharePoint 2010/2013 appears to be a hot topic and is drawing quite a few speakers, all things considered. Of course, the fact that Yammer is one of the platinum sponsors probably doesn't hurt.
Engage at the Nexus of Social, Mobile, Information and Cloud
Gartner Portals, Content & Collaboration Summit delivers the tools and insights needed to tap into unprecedented portals, content and collaboration opportunities. Disruptive trends are yielding an array of business-critical imperatives: Deliver secure access across a widening range of devices. Mine and leverage nontraditional content. Use social software to drive efficiency and innovation. Exploit context-aware computing.
This summit represents the single most important event in the portals, content and collaboration space, where IT and business leaders gather to learn from the latest Gartner research and interact with 24+ Gartner analysts, peers and solution leaders. Experience new research and innovative thinking in a variety of session formats that drill down to your most critical topics.
They also hit pretty hard on the concept of a mobile workforce, which rings especially true in today's world. But it's not just about being mobile, either; it's really about being able to build the best teams from the best talent anywhere in the world through the use of mobile technology. There is no reason you can't have two outstanding team members who just happen to sit at opposite ends of the country in their own home offices. The insistence on cube drones who have to sit on the same floor, or next to each other in cubes, is swiftly evaporating.
Here's their key benefits section for the conference:
Through analyst sessions, roundtables, workshops, tutorials and end-user case studies, you’ll gain the insight to:
Empower mobile workforces securely
Know when to use cloud and SaaS options
Deliver the experiences customers and constituents have come to expect
Harness social to drive innovation and delight customers and employees
Demonstrate the business value of portals, content and collaboration technologies
Add increasing amounts of context to everything you do
Understand how PCC impacts organizational dynamics
Use analytics to improve business processes
So, anyone else going to this event? Drop me a line here or shoot me a PM over at Twitter.
Back when I was doing a lot of testing that involved loading test data into SharePoint lists and then dumping it to run another set of test data, the traditional method of deleting items one at a time from the list was really slow. So I created a script that uses the ProcessBatchData method to build an XML command structure and delete the items in batches, which let me run several automated tests with different conditions. It worked well enough that I went ahead and uploaded it to the MS gallery for others in the same situation. I just checked, and we've passed the 100-downloads milestone. So, yay! Glad people find it useful.
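The gallery script does a bit more, but the core idea fits in a few lines. Here's a minimal sketch (the site URL and list name are placeholders; the rest is the standard SPWeb.ProcessBatchData pattern):

```powershell
# Minimal sketch: delete every item in a list with one ProcessBatchData call.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$web  = Get-SPWeb "http://sharepoint/sites/testdata"   # placeholder URL
$list = $web.Lists["Test Data"]                        # placeholder list name

# Build one CAML batch containing a Delete method per item ID.
$sb = New-Object System.Text.StringBuilder
[void]$sb.Append('<?xml version="1.0" encoding="UTF-8"?><Batch>')
$i = 0
foreach ($item in @($list.Items)) {
    $i++
    [void]$sb.Append("<Method ID=`"$i`"><SetList>$($list.ID)</SetList>")
    [void]$sb.Append("<SetVar Name=`"ID`">$($item.ID)</SetVar>")
    [void]$sb.Append('<SetVar Name="Cmd">Delete</SetVar></Method>')
}
[void]$sb.Append('</Batch>')

# One round trip to the server instead of one per item.
$web.ProcessBatchData($sb.ToString()) | Out-Null
$web.Dispose()
```

The speedup comes entirely from batching: calling item.Delete() in a loop is one server round trip per item, while the batch goes over in a single request.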
I was working on a search server recently and started getting errors that the crawl component was failing with “CreateTempFolderForCacheFiles”.
As it turns out, the environment I was working in was an extremely secure farm where permissions were locked down, shares were often not permitted, and accounts were granted only the minimum permissions they needed to run. In this case, the local temporary folder where the index files are created had been deleted, and the search service did not have permission to recreate it. This blocked the crawls from proceeding; they were just sitting there.
To fix the issue, the folders need to be recreated. The nice part is that with PowerShell you can quickly recreate them in the correct location.
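A sketch along these lines (assuming SharePoint 2010's search object model; the path comes from each crawl component's IndexLocation, and WSS_ADMIN_WPG is a stand-in for whichever group your locked-down farm actually grants):

```powershell
# Sketch: recreate any missing crawl component index/temp folders with an ACL
# the search service can use. Run this on the affected crawl server, since the
# paths are local to it.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$ssa      = Get-SPEnterpriseSearchServiceApplication
$topology = Get-SPEnterpriseSearchCrawlTopology -SearchApplication $ssa -Active

foreach ($component in $topology.CrawlComponents) {
    $path = $component.IndexLocation
    if (-not (Test-Path $path)) {
        New-Item -Path $path -ItemType Directory | Out-Null

        # WSS_ADMIN_WPG is an assumption; substitute the account or group
        # your farm actually uses for the search service.
        $acl  = Get-Acl $path
        $rule = New-Object System.Security.AccessControl.FileSystemAccessRule(
                    "WSS_ADMIN_WPG", "FullControl",
                    "ContainerInherit,ObjectInherit", "None", "Allow")
        $acl.AddAccessRule($rule)
        Set-Acl $path $acl
    }
}
```

Once the folders exist with the right permissions, the stuck crawls should pick back up.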
I'm not an expert with FAST; I just have to deal with it. This is a fun little thing that happened recently. SharePoint adoption has been going really well: more people are using it, more people are adding content, more content is being indexed, and more space is being used.
The drive we installed FAST Search on is fairly small for drives these days, roughly 136GB. This particular company also has a policy that when a drive hits 80% utilization, an alert goes off to go look at the server's disk usage and reduce it. As I know from getting these alerts, when FAST is doing an index, there are times when the %FASTSEARCH%\tmp and %FASTSEARCH%\data\data_index directories get pretty full. Like an extra 60GB worth of full. That, along with the other items on the drive, tips past 80% utilization and I get the email alert. The reason is that FAST Search Server keeps a read-only binary index file set to serve queries while building the next index file set; the worst-case disk space usage for index data is approximately 2.5 times the size of a single index file set.
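To put rough numbers on that (the index set size here is my back-of-the-envelope estimate, not a measurement): with an index file set of size S, worst-case usage is about 2.5 × S. An S of roughly 40GB gives a worst case around 100GB, i.e. an extra ~60GB over the steady-state 40GB, which matches what I see. And since 80% of 136GB is about 109GB, ~60GB of transient index data on top of everything else on the drive crosses the alert line easily.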
This generally happens at night, and by morning all the indexing is done and the drive has plenty of free space again. It's not really worth ordering another drive at this point, since it's a temporary condition and doesn't significantly affect performance, but it would be nice to minimize the number of alerts I get by reconfiguring FAST to use a different drive for temporary files, and maybe even for storing index data.
As it turns out, FAST is not as easily configurable as, say, Windows when you want to move the temp directory, or even SharePoint when you want to move the ULS and usage logs. A quick Bing search did not turn up any useful articles about how to reconfigure the directories FAST uses, except for one example that suggested editing some of the XML. That's a bad idea, because it puts you in an unsupported configuration and risks upgrade issues in the future. The resolution turned out to be relatively simple, though, thanks to junction points.
As it happened, we had just purchased a rather large 300GB HDD to house the ULS and usage logs, because Microsoft best practices for SharePoint say to keep the log files and the SharePoint binaries on separate drives if possible, and 300GB was the smallest standard drive the company supported when we ordered. That meant I had a lot of free space sitting on the new drive, if only I could get FAST to use it.
In my case, I was able to use KB2506015 to reconfigure the directories with junction points, stay in a supported configuration, and utilize the extra space.
If additional storage can be added to the server, the entire %FASTSEARCH%\data directory can be moved to a new location with the same permissions ("Full Control", granted to the FASTSearchAdministrators local group) and connected back to the installation via a junction point. To do so, follow the steps below on each FAST index server (a consolidated sketch follows the steps):
1. Stop the FAST Search for SharePoint service.
2. Stop the FAST Search for SharePoint Monitoring service.
3. Move %FASTSEARCH%\data to the larger storage you have added.
4. Run the following in a command prompt: mklink /j %FASTSEARCH%\data %NEW_LOCATION%\data
5. Start the FAST Search for SharePoint service.
Please note that while other methods of relocating the index outside of the %FASTSEARCH% parent directory are not supported, the entire parent directory can also be moved to a new physical location without using a junction point (skipping step #4 above) if the drive letter, path, and permissions remain identical.
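For reference, here's the whole sequence as one sketch (run elevated on each FAST index server; the service names are the FAST for SharePoint defaults as I recall them, and E:\FASTData is a made-up target on the larger drive, so adjust both for your farm):

```powershell
# Consolidated sketch of the KB2506015 steps. E:\FASTData is hypothetical;
# point it at wherever your extra space lives.
$fastHome = $env:FASTSEARCH        # set by the FAST installer, e.g. D:\FASTSearch
$newHome  = "E:\FASTData"          # hypothetical new location

Stop-Service FASTSearchService     # FAST Search for SharePoint
Stop-Service FASTSearchMonitoring  # FAST Search for SharePoint Monitoring

# Make sure the target parent exists, then move the data directory.
# (If Move-Item balks at a cross-volume move, robocopy /E /MOVE does the job.)
New-Item -ItemType Directory -Path $newHome -Force | Out-Null
Move-Item "$fastHome\data" "$newHome\data"

# mklink is a cmd.exe built-in, so shell out for the junction point.
cmd /c mklink /j "$fastHome\data" "$newHome\data"

Start-Service FASTSearchService
```

Remember that the new location needs the same permissions as the original (Full Control for the FASTSearchAdministrators local group).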
Once I started the FAST Search for SharePoint service back up, everything came up perfectly and I could run a crawl without issues.