SharePoint 2010 Fast Disk Space Part 2 (Solution)


The solution for the issue is to stop the service, clear the logs, and start the service back up so it can reprocess the old click analysis data. It should then process those files and improve the link rankings.

Note: This will have an impact on the search results; proceed with caution.


Back up the following location to a different server or drive.


<drive>:\FASTSearch\data\sprel\ (all folders below this)



Stop the sprel process, delete the logs, start the service back up, run the SPRel processing command, and ensure everything works as expected. Test the entire procedure in a test environment before trying it in a live environment.

  1. Run the following command to stop the sprel process:
nctrl stop sprel walinkstorerreceiver
  2. Clean the data\sprel directory (except the config folder)
  3. Run the following command to start the sprel process again:
nctrl start sprel walinkstorerreceiver
  4. Confirm everything is OK by running the following command:
spreladmin ShowStatus
  5. Start the processing by executing the following command:
spreladmin StartProcessing
  6. Regularly check the status by executing the following command:
spreladmin ShowStatus
  7. Once the status shows "ready doing nothing", the processing is complete.



We faced an issue during the incremental crawl. The following error popped up, and because of it some search results were affected.

The Content Plugin received a "Processing Error" response from the backend server
for the item. ( ERROR: PostProcessBatch failure in processor SPCIDStorer.
Traceback (most recent call last):
  File "", line 63, in PostProcessBatch
  File "", line 111, in send
WindowsError: [Error 3] The system cannot find the path
specified: 'E:\\FASTSE~1\\data\\sprel\\cidstorer_backup/*.*' [17]

Fix: Create an empty folder <drive>:\FASTSearch\data\sprel\cidstorer_backup


The expectation is that the logs shouldn't grow at a rapid pace, the clickthrough archive should be processed, and, most importantly, the search service shouldn't see any errors. Monitor the process for a couple of days and ensure there are no errors.

The following location should be empty if the process worked as expected:

<drive>:\FASTSearch\components\resourcestore\generic\clickthrough (data pushed by the SP extraction job)

Start an incremental crawl to ensure there is no impact on the results. Monitor the process for a week and ensure there are no errors in the sprel logs or crawl logs.



SharePoint 2010 Fast Disk Space Part 1 (Issue)

I recently faced a disk space issue on a SharePoint FAST 2010 crawl server. We needed to find out what was consuming the space; if it were the index or any core components, we would have no option other than increasing the space.

Investigate DiskSpace

Let's begin the journey by identifying the files; we will use a simple tool (WinDirStat) to find the space-consuming files.


The initial results showed that the core components are not the issue; all the space is being consumed by logs. I was surprised to see an SPRel log that was 27 GB and growing at a steady pace. The next big thing was query logs; we have a PowerShell script that cleans up query logs older than 60 days.

  1. FASTSEARCH\var\log\querylogs\  (can be safely cleaned up)
  2. FASTSEARCH\var\log\sprel\
  3. FASTSEARCH\data\sprel\worker

We realized something was going wrong with the SPRel process. To give you an idea of the SPRel service:

“SPRel is a clickthrough log analysis engine that improves search results relevancy by analyzing the entries that users click on in search result sets. In FAST Search server 2010, we can use spreladmin.exe to configure SPRel, to schedule the clickthrough log analyses, and to retrieve status information”

Investigate SPRel

Check if the service is running as expected. The status command should be executed at a command prompt:

<drive>:\FASTSearch\bin\spreladmin ShowStatus


The process was actually stuck and not working as expected; it had stopped working in 2013 and had been stuck ever since (we found this in 2016). I think we are in the right direction. Now we have to analyze the log to find the reasons. Here comes another problem: the log is far too huge to open in Notepad (27 GB). A handy tool saved my life: Large Text File Viewer. It requires no installation, doesn't hog your CPU, and lazy-loads the file with a continuous query mode.


The first possible failure found in the log was disk space: there wasn't enough of it to process the analysis.

Impossible to schedule more targets (timeout 60s). 4 ready targets:

make_1032.urls_by_urlid.1_b1. Target was never started. Too large output for the workers. Input size : 153196411

make_1032.urls_by_urlid.3_b3. Target was never started. Too large output for the workers. Input size : 153952094

make_1032.urls_by_urlid.0_b0. Target was never started. Too large output for the workers. Input size : 153724399

make_1032.urls_by_urlid.2_b2. Target was never started. Too large output for the workers. Input size : 153472337

— stopping build.

The second failure: the server didn't respond and the analysis failed.

systemmsg Reset of analysis completed.
systemmsg Analysis failed 10 times, will retry from start.
systemmsg sprelrsclient failed. stdout: Connected to ResourceStore stderr: Could not list resources.. The remote server returned an error: (500) Internal Server Error.

It will take some time to figure out the solution; I will discuss it in the next post.

Using jQuery with custom Web services in SharePoint

I had a requirement to use jQuery for fetching data from my custom web service. The approach was adopted to give a better UI experience, so the user doesn't feel the postback and the flickering of screens.

I faced a lot of issues making it work; with all the notes found by Googling, I made it work like a charm.

The approach is as follows:

Step 1. Create a web service

Step 2. Client Side JavaScript definition to consume the web service

Step 1: Create web service

We will create a custom web service with a web method HelloWorld that takes a name as a parameter and returns a string in JSON format.

JSON (JavaScript Object Notation) is a lightweight data-interchange format. Accessing its values works the same as with C# objects: Object.Property.

Create a new web service project in ASP.NET; the web service definition is as follows:

namespace WebService1
{
    using System.ComponentModel;
    using System.Web.Services;
    using System.Web.Script.Services;
    using System.Web.Script.Serialization;

    /// <summary>
    /// Summary description for Service1
    /// </summary>
    [WebService(Namespace = "http://tempuri.org/")] // default namespace from the project template
    [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
    [ScriptService]
    public class Service1 : System.Web.Services.WebService
    {
        /// <summary>Hello world</summary>
        /// <param name="name">Name</param>
        /// <returns>Hello name</returns>
        [WebMethod]
        [ScriptMethod(ResponseFormat = ResponseFormat.Json)]
        public string HelloWorld(string name)
        {
            var responseObject = new
            {
                MyName = "Hello " + name
            };
            return new JavaScriptSerializer().Serialize(responseObject);
        }
    }
}

  • [ScriptService] must be added to the class to allow calls from JavaScript.
  • [ScriptMethod(ResponseFormat = ResponseFormat.Json)] over the method definition makes it return JSON.

SharePoint context

Deploy the web service to the 12 hive folder so it is accessible to any user regardless of permission rights.

The following handler has to be added to the web.config in the layouts folder to avoid the 500 Internal Server Error that occurs when you call the service from jQuery.


    <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory,
System.Web.Extensions, Version=, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />


Step 2: Client Side JavaScript definition

jQuery AJAX calls are very simple to use compared to native AJAX calls. The call is made by invoking $.ajax with the required parameters.

var Name = "Balaji";

var servicePath = "/_layouts/1033/myservice/Service1.asmx";

//the current site url + service url
var serviceURL = location.href.substring(0, location.href.lastIndexOf('/')) + servicePath;

$.ajax({
    type: "POST",
    contentType: "application/json; charset=utf-8",
    url: serviceURL + "/HelloWorld",
    data: '{"name":"' + Name + '"}',
    dataType: "json",
    success: processResult,
    error: failure,
    crossDomain: true
});

//Handle result
function processResult(xData) {
    // The web method returns a JSON string and ASP.NET wraps it in a
    // "d" property, so the string under xData.d is parsed here.
    var responseObject = jQuery.parseJSON(xData.d);
    alert(responseObject.MyName);
}

//Handle failure
function failure() {
    alert("Error occurred");
}
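One subtlety worth calling out: the response is double-encoded. The web method itself returns a JSON string, and the ASP.NET script handler wraps that string in a "d" envelope, which is why the success handler parses xData.d. The round trip, sketched in Python purely for illustration:

```python
import json

# What the web method produces: a JSON string...
inner = json.dumps({"MyName": "Hello Balaji"})
# ...which ASP.NET then wraps in its own JSON envelope under "d".
envelope = json.dumps({"d": inner})

# The client therefore parses twice: once for the envelope,
# once for the string stored under "d".
xData = json.loads(envelope)
responseObject = json.loads(xData["d"])
assert responseObject["MyName"] == "Hello Balaji"
```

If you instead return an object (not a pre-serialized string) from the web method, one parse would suffice.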

  • contentType and dataType must be defined as JSON.
  • crossDomain is set to true when you make cross-domain calls, i.e. your app runs in one domain and the service runs in another.
  • The parameter passed to the web service is case sensitive and the format should be JSON.

The parameter "name" in the web method HelloWorld must match the key used in the jQuery call's data attribute:

data: '{"name":"' + Name + '"}'

Hope this kick-starts your work.

CAML Basics

CAML (Collaborative Application Markup Language)

CAML queries are used internally to query SharePoint lists with specific conditions and formatting. They are as powerful as SQL queries and improve performance. A CAML query can be split into:

  • Display Part
  • Conditional Part

Display Part

They are generally used to display the content in a grouped and ordered manner.

Eg: OrderBy, GroupBy

    <OrderBy>
        <FieldRef Name='Name'/>
        <FieldRef Name='Location'/>
    </OrderBy>

Conditional Part

They are used to retrieve data that matches certain conditions, using logical operators. The following example queries give an overview.

1. Simple Condition

The condition (Name == "Balaji") in CAML form would look like:

    <Where>
      <Eq>
        <FieldRef Name='Name'/>
        <Value Type='Text'>Balaji</Value>
      </Eq>
    </Where>

2. Multiple Conditions

Multiple conditions with logical operators have a tricky part: each logical operator can take only two operands, so combining more than two conditions requires another logical operator encapsulating them.

The condition ((Name == "Balaji") && (Location == "Chennai") && (ID == 1)) in CAML form would look like:

    <Where>
      <And>
        <And>
          <Eq>
            <FieldRef Name='Name'/>
            <Value Type='Text'>Balaji</Value>
          </Eq>
          <Eq>
            <FieldRef Name='Location'/>
            <Value Type='Text'>Chennai</Value>
          </Eq>
        </And>
        <Eq>
          <FieldRef Name='ID'/>
          <Value Type='Number'>1</Value>
        </Eq>
      </And>
    </Where>
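The pairwise nesting above generalizes: to combine N conditions, you fold them two at a time. A small illustrative sketch of that folding (this is not a SharePoint API, just a way to see the pattern):

```python
def fold_and(conditions):
    """Combine a list of CAML condition snippets with <And>, two at a time,
    the same way the example nests (Name, Location) before adding ID."""
    result = conditions[0]
    for cond in conditions[1:]:
        result = "<And>{}{}</And>".format(result, cond)
    return result
```

Three conditions produce two nested And elements, four produce three, and so on.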

Normal operator      SharePoint-specific
=                    Eq
>                    Gt
<                    Lt
>=                   Geq
<=                   Leq
Like                 Contains
Null check           IsNull
Not null             NotNull
Begins               BeginsWith
Date range           DateRangesOverlap
&&                   And
||                   Or

Generally, CAML queries can be generated easily using several tools. Refer to the following link for more info:

SP Value types

MEF with Silverlight 4

MEF (Managed Extensibility Framework). While building applications for SharePoint using Silverlight, I came across a scenario of dynamically loading a XAP into a running application. The main reasons for moving towards this approach are scalability and reducing the delay caused by loading the XAP files up front; content is instead loaded dynamically on request. I will come up with a simple application demonstrating MEF; for now, here are quick reference links on MEF.

10 Reasons to use MEF


Part 1 , Part 2 , Part 3 , Part 4 , Part 5


Channel 9 Video series


Cloud Light application, submitted for the MIX code challenge, just 10 KB in size (More..)

SharePoint DB an Overview

SQL Server is the heart of SharePoint; it acts as the brain behind the scenes. SharePoint installed on a server-edition OS uses the Windows Internal Database by default. You can manually configure SharePoint to use a SQL Server instance during configuration. To check the instances running on the machine, use SQL Server Configuration Manager.


The above picture shows two instances Internal Database and SQL Express.

Use SQL Server Management Studio for easier access. Connect to the appropriate instance and expand Databases to view the databases attached to that instance.


A default configuration would have three databases:

_AdminContent (Central Admin)

    There is generally one central administration database per farm.

Wss_Content (Site Content)

    One content database is created by default; it can hold up to 15,000 sites.
    A separate content database can be created for each application in the farm to keep control over database growth.

Find the content db in your farm

Central Admin >>Application management >> Content Databases

Wss_Search (Search Index)

This is applicable only if the search index is enabled for the farm.

Note: The databases are created by default with autogrowth enabled (the database grows as the data increases) and no size limit on the database file. This applies to both the .MDF (SQL data file) and the .LDF (SQL log file).

The following procedure can help avoid that situation. These are my personal views; take the necessary backups before you proceed.

Pre-Action
  1. Connect to the instance with Management Studio
  2. Right-click the database and select Properties
  3. Select Files; you will see two files, the database and the log
  4. Click the Autogrowth button and the following window appears
  5. Set Restricted File Growth according to your plan; the size should be greater than the existing file size


Post – Action

The pre-action helps when a new server is set up. The common case is that everyone misses this step, and once the log grows huge they search around for a solution; the following might help. This is my view, so take precautions before giving it a try.

  1. Stop SharePoint services and IIS
  2. Connect to the SQL instance
  3. Expand the Databases item
  4. Right-click the database >> Tasks >> Detach
  5. Rename the log file of the detached database
  6. Attach the database manually using the following query in SQL Management Studio:
sp_attach_single_file_db @dbname = 'databasename', @physname = 'C:\Databases\databasefile.mdf'
  7. Start SharePoint services and IIS; a new log file is generated.