In The Mix

As a SharePoint architect I have the business behind me and the developers and IT pros on my shoulders.

Adding Primary and Dependent Lookup Fields using JSOM October 7, 2013

Filed under: JavaScript,SharePoint,SharePoint 2013 — fmuntean @ 9:10 am

There is a lot of information out there on how to add lookup fields using JSOM (the JavaScript Client Object Model), but there is little on how to add dependent lookup fields.

The MSDN documentation is sketchy at best and does not even document the necessary parameters.

After some fiddling around with unhelpful server responses and many combinations of calls, and with help from Fiddler, I got the secondary lookup fields working.

So, to keep this short, the code samples are as follows:

1. Adding primary Lookup field:

/// Parameters:
/// clientContext:     the JSOM clientContext
/// onList:            the current list where the lookup field has to be created
/// toList:            the list where this lookup field will look for the data
/// fieldXml:          the xml definition for the lookup field
/// toLookupFieldName: the field name in the toList that is used to copy the value over.
var AddPrimaryLookupField = function (clientContext, onList, toList, fieldXml, toLookupFieldName) {
  // Add the lookup field from its XML definition
  var field = onList.get_fields().addFieldAsXml(fieldXml, true, SP.AddFieldOptions.defaultValue);

  // Cast the field to a lookup field
  var fieldLookup = clientContext.castTo(field, SP.FieldLookup);

  // Set the lookup list ID and the field to look up inside that list
  // (toList must already be loaded so its ID is available)
  fieldLookup.set_lookupList(toList.get_id().toString());
  fieldLookup.set_lookupField(toLookupFieldName || "Title");
  fieldLookup.update();

  clientContext.executeQueryAsync(
    Function.createDelegate(this, function () {
      //success code here
    }),
    Function.createDelegate(this, function (sender, args) {
      //failure code here
    })
  );
  return fieldLookup;
};
2. Adding dependent or secondary lookup fields:

/// Parameters:
/// clientContext:     the JSOM clientContext
/// onList:            the current list where the dependent lookup field has to be created
/// lookupFieldName:   the name of the dependent lookup field to create
/// primaryField:      the primary lookup field (created above) that this field depends on
/// toList:            the list where this lookup field will look for the data
/// toLookupFieldName: the field name in the toList that is used to copy the value over.
var AddDependentLookupField = function (clientContext, onList, lookupFieldName, primaryField, toList, toLookupFieldName) {
  // Get the field in the target list to be linked to
  var toField = toList.get_fields().getByInternalNameOrTitle(toLookupFieldName);

  // Add the dependent lookup field based on the primary one
  var field = onList.get_fields().addDependentLookup(lookupFieldName, primaryField, toField);

  // Cast the field to a lookup field
  var fieldLookup = clientContext.castTo(field, SP.FieldLookup);

  // Even if we specify the field in addDependentLookup we still have to set it here again
  fieldLookup.set_lookupField(toLookupFieldName || "Title");
  fieldLookup.update();

  clientContext.executeQueryAsync(
    Function.createDelegate(this, function () {
      //success code here
    }),
    Function.createDelegate(this, function (sender, args) {
      //failure code here
    })
  );
  return fieldLookup;
};

Is Active Directory a thing of the Past? February 3, 2012

Filed under: Question,SharePoint — fmuntean @ 9:56 pm

With the new information we have received lately from Microsoft, I wonder whether Active Directory will soon be a thing of the past.

Now let me explain why:

1. Windows Phone 7 already uses Windows Live authentication.

2. Windows 8 will introduce the possibility to log into the system using a Windows Live account.

3. Office 2010, and even more so Office 15, supports Windows Live.

4. SharePoint 15 will support OAuth2 out of the box, which is used by Windows Live.


Currently, everywhere I go I need a separate user name and password, and sometimes even security questions or a captcha on top.

Will there be a time when true “Single Sign-On” becomes a reality?

There is still a long path ahead, but I can see a future where we do not have to remember hundreds of passwords and instead provide everybody with a single unique identity: “us”.


I’ll let you wonder and debate if that is what you want and/or needed. 


SPTraceView v1.0 Released June 28, 2011

Filed under: SharePoint — fmuntean @ 7:42 pm

Yesterday we finally released version 1 of the SharePoint Trace View tool, available on CodePlex.

SPTraceView analyzes the ULS trace messages coming from all SharePoint components in real time and can notify you with balloon-style tray bar messages when something of interest happens. This functionality is targeted mainly at people who develop and/or test custom SharePoint applications. It can also be useful to administrators for diagnosing and troubleshooting their SharePoint farm.

As soon as you run it, SPTraceView will start receiving all messages from SharePoint.



  • Receives messages directly from the ETW logging system.
  • Lets you easily enable/disable processing of the messages.
  • Processes and filters messages based on different criteria.
  • Shows balloon-style real-time notifications.
  • Logs to XML files.
  • Traces to DebugView for further filtering and processing.
  • Monitors the entire farm remotely.
  • Notifies you automatically when updates are available.


You can download the current version from here:

A version supporting SharePoint 2010 is also available, and work is under way to release a stable version of it soon.


Where should I keep my Configuration? May 29, 2010

Filed under: SharePoint — fmuntean @ 8:15 pm

This is one of the tough questions when talking about custom SharePoint applications. Out of the box, there is no specialized service providing a configuration API for your application. However, being a platform, SharePoint provides many places you can use to keep your configuration. Let’s look at your options and some pros and cons of each.

1. Web.Config:

If you are an ASP.NET developer, usually the first place you would use is the web.config. The problem is that your production SharePoint farm most likely uses multiple servers, and manually changing the web.config on each is a nightmare and inadvisable. The only acceptable way to change the web.config is to use the SPWebConfigModification class and a feature to install and uninstall your modifications.

2. SPPersistedObject.Properties:

There are many places inside the SharePoint object model that you can use to store your configuration. All the objects derived from SPPersistedObject have a Properties hashtable where you can store configuration. They exist at all levels: SPFarm, SPWebApplication, SPWeb, and SPFeatureDefinition, just to name a few. You can also create your own persisted object to use as your configuration object as part of the farm.

3. XML File:

There is nothing to stop you from deploying a configuration file into SharePoint and loading it inside your application. However, you might want to encrypt this file if it contains sensitive data that you don’t want stored in clear text.


4. SharePoint Lists:

While for all the other places you will need to build some kind of UI for setting the configuration values, by using SharePoint lists you get that for free. To read the values, just use CAML queries from your code.
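To make this concrete, a configuration list could hold simple key/value pairs; the list layout and the key name below are illustrative, not from the original post. A CAML query fetching one setting by its key (stored in the Title column) might look like:

```xml
<Query>
  <Where>
    <Eq>
      <FieldRef Name="Title" />
      <Value Type="Text">SmtpServer</Value>
    </Eq>
  </Where>
</Query>
```

Running this against the configuration list returns the item whose Title (the key) is SmtpServer; your code then reads the value column off that item.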


InfoPath Task Forms Made Easy Part 2 November 27, 2009

Filed under: InfoPath,SharePoint — fmuntean @ 9:52 pm

In a previous post I talked about an approach for transferring complex data between the workflow and the InfoPath task form.

In this post I will provide the implementation described there, along with instructions on how to install and use it.

Download the following and extract it on your development machine. It contains all the necessary files to follow the steps described here.

General Description:

The idea here is to bypass the ItemMetadata.xml file and instead send the XML that the form will de-serialize and display.

For this I created a new content type derived from the out-of-the-box InfoPath content type. The ID for this content type is “0x01080100C9C9515DE4E24001905074F980F93161”. When used, this content type will still open the InfoPath task form, but instead of processing ItemMetadata.xml it will read a special key-value pair (“IPFormDataXML”) from ExtendedProperties that contains the XML for that task. Once the task form is closed, the XML data for the task is serialized back into the same key-value pair to be read by the workflow.

By using this approach we can pass quite complex data, such as repeating tables, between the workflow and the task form without passing every value separately through ItemMetadata.xml. Another benefit is that you can now validate the data types using the task XSD and have type safety when transferring information between the workflow and the task form.


It is quite simple to install this in your farm, and it does not affect any existing workflows, since we are using our own content type.

Inside the zip file you will find the MFDWorkflowIPTask.wsp SharePoint solution, which you will need to install and deploy into your farm using STSADM and/or the Central Admin site. This installs a site collection feature that enables a new content type (the MFD.WorkFlowIPTaskCtype content type).
Now enable the MFD.WorkFlowIPTaskCtype feature on the site collection where you will be using workflows that need this content type, and you are done with the installation.

After this, nothing actually happens until you use the new content type in your workflow.

Developer Instructions:

Each time when this content type is used in Workflows the _layouts/MFD/WrkTaskIP.aspx page will be used instead of the OOB one (_layouts/WrkTaskIP.aspx).

I am not going to show here how to use the ItemMetadata.xml file to pass data between the workflow and the task form, but I have included a demo with source code in the zip file (the ApprovalWF1 project) that shows how to use ItemMetadata.xml and ExtendedProperties. The demo uses just a single task form, but imagine a complex workflow with 10 or 20 tasks and you can see how hard that code is to maintain as changes are needed to the task forms.

The ApprovalWF2 demo project shows you how to use the new approach by passing the full XML to the task form; again, being a demo, it only shows how to implement it for a single task.

This approval workflow deals with an Expense Report that contains complex data, including repeating tables. Consider the following requirement: your workflow tasks need to contain all the data from the original expense report, so that an approver does not need to open the original report, and approvers cannot see other approvers’ notes during the approval process. If you want to know why you can’t just use the Expense Report form, add fields for each approver, and let everyone deal with that form, drop me a line.

So now for the solution: how to implement this requirement.

First, the forms: the Expense Report Form and the Task Form are both InfoPath forms. There are two ways of building InfoPath forms:

  1. Just start adding fields as needed: very good for a POC, or when you don’t need much control over the fields’ namespaces, schema, and types.
  2. Start by thinking of InfoPath the way you do your database: schema first. Manually build your XSD files and then attach them to your InfoPath forms. This lets you control everything, including sharing data types between multiple forms. This is how I recommend building any enterprise-level InfoPath form.

Under the Forms folder you have both forms, the Expense Report Form and ExpenseReportApproveTaskForm2, including their schema files. To include all the fields from the Expense Report Form (ERF) in the Task Form, all you need to do is add the following tags:

    <xsd:import namespace=""
                schemaLocation="ExpenseReport.xsd"/>

This imports all the fields defined in ExpenseReport.xsd, the schema used by the Expense Report Form.

    <xsd:element ref="exp:expenseReport" minOccurs="0"/>

This creates an element in the ExpenseReportApproveTask2 form for all the data included in the Expense Report Form.

Now, having the XSD schema files for both forms, we can generate two classes matching the schemas using:

        xsd.exe ExpenseReport.xsd ExpenseReportApproveTask.xsd /c

We will use these two classes to serialize and de-serialize the XML data for both the Expense Form and the Task Form.

  • Reading Expense Report Form inside workflow:

    private expenseReport _expense;
    public expenseReport Expenses
    {
      get
      {
        if (_expense == null)
        {
          // Read the form XML into memory
          XmlSerializer serializer = new XmlSerializer(typeof(expenseReport));
          // The original post elides how the stream is opened; open the expense report form's stream here
          using (Stream stream = /* ... */)
          { _expense = serializer.Deserialize(stream) as expenseReport; }
        }
        return _expense;
      }
    }

  • Create the Task:

    private void CreatingApproveTask(object sender, EventArgs e)
    {
      ApproveTaskId = Guid.NewGuid();
      ApproveTaskProperties.AssignedTo = /* approver login name (elided in the original) */;
      ApproveTaskProperties.SendEmailNotification = true;
      ApproveTaskProperties.Title = string.Format("Approval for: {0}", /* ... */);

      // Build the strongly typed task and embed the full expense report in it
      expenseReportApproveTask task = new expenseReportApproveTask();
      task.Decision = string.Empty;
      task.Notes = string.Empty;
      task.expenseReport = this.Expenses;

      ApproveTaskProperties.TaskType = 0;
      // Serialize the task object into the special ExtendedProperties key
      // (the serialization call is elided in the original)
      ApproveTaskProperties.ExtendedProperties[IPFormDataXMLTag] = /* ... */;
    }

  • Read the Task:

    private void ApproveTaskChanged(object sender, ExternalDataEventArgs e)
    {
      // Read the task XML back from ExtendedProperties and de-serialize it
      string IPFormDataXML = this.ApproveTaskAfterProperties.ExtendedProperties
                   [IPFormDataXMLTag] as string;
      expenseReportApproveTask task = Utility.Deserialize(IPFormDataXML);

      string taskDecision = task.Decision;
      isFinished = !string.IsNullOrEmpty(taskDecision);
      if (isFinished)
        ApproveTaskOutcome = taskDecision;
    }

This should give you an idea of how easy and clean the code is now. You will not have to change this code just because somebody decided to rename one of the fields in the Expense Report Form or even added a new field. The code stays the same; what changes are the generated classes, which need to be refreshed, but that is a recompilation matter, not a code change.

The other thing you should notice here is that we no longer use strings to pass data, but the class itself, which is strongly typed, so there is no more need to parse a string into another type like Boolean and hope that nobody changes the type on you.

Adding a new task form? No problem: they all follow the same pattern, so adding another 10 or 20 forms to the workflow is a breeze, and it reduces the spaghetti code needed to wire them up, as you will have a different generated class for each.

The only thing I have not shown yet is how to use the new content type, and that is easy: just modify the elements.xml for the workflow to specify that the tasks should use the new content type:
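As a sketch (the exact snippet is elided in the original), the workflow’s elements.xml can point its tasks at the new content type via the TaskListContentTypeId element inside the Workflow element’s MetaData section:

```xml
<MetaData>
  <TaskListContentTypeId>0x01080100C9C9515DE4E24001905074F980F93161</TaskListContentTypeId>
</MetaData>
```

The ID shown is the one for the MFD.WorkFlowIPTaskCtype content type given earlier in this post.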

As you are going to deploy this to production, I would recommend adding the MFD.WorkFlowIPTaskCtype feature as a dependency of your workflow feature by adding the following tag inside the Feature tag.
           <ActivationDependency FeatureId="33cf18b1-e091-46c2-9f52-114516731db7"/>

By doing this, the system will automatically activate the required feature, and it will fail to activate your feature if the required feature is missing.

I hope this helps clarify the approach; if not, or if you need more info, don’t hesitate to contact me or leave a comment.


How to add Web Parts to the list item forms September 13, 2009

Filed under: SharePoint — fmuntean @ 6:28 pm

As many of you know, list item forms (new, view, or edit) do not allow editing the page to add web parts.

One way to overcome this limitation is to use SharePoint Designer.

However, while searching the internet for something totally unrelated, I found a post describing a hidden query string that enables web part management for those pages.

So here I am passing along the findings:

  1. Access the custom list and click New.
  2. Append the following to the URL: &PageView=Shared&ToolPaneView=2
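The two steps can be wrapped in a small helper (plain JavaScript of my own, not a SharePoint API) that appends the switch to whatever form URL you have:

```javascript
// Hypothetical helper: append the hidden switch that unlocks web part editing.
// Uses "?" when the URL has no query string yet, "&" otherwise.
function enableWebPartEditing(url) {
  var separator = url.indexOf("?") === -1 ? "?" : "&";
  return url + separator + "PageView=Shared&ToolPaneView=2";
}
```

For example, calling it on /Lists/Tasks/NewForm.aspx yields /Lists/Tasks/NewForm.aspx?PageView=Shared&ToolPaneView=2.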

That will transform your boring vanilla data entry form from something like this:


To this:



Now I wonder how many other Easter eggs are hidden inside SharePoint?


Stubbing documents in SharePoint September 9, 2009

Filed under: SharePoint — fmuntean @ 10:31 pm

It is well known that SharePoint keeps all its data in a SQL database. SQL Server is a relational database, and there is nothing relational about storing documents in the content database. Yet as your SharePoint deployment grows, more and more people store documents in it.

When versioning is enabled on a document library, each version of a document is kept in full instead of keeping only the bits that are different.

With many users on the system, there is a high probability that the same document is stored in multiple places in SharePoint.


To alleviate the problems described above, we can store the documents outside SharePoint and keep only the attributes around them in the content database. This keeps the content database smaller and more manageable.


There are two options to achieve this:

  1. Using the External BLOB Storage API, the BLOBs (this is how documents are stored inside the SQL database) can be stored outside the SharePoint content database. However, this is a farm-wide configuration, and it requires manual management of orphaned BLOB files, as there is no method in the interface to accommodate this. For more info on this follow:
  2. Using a stubbing mechanism, which I will describe further.


A stub is defined as “a short part of something that is left after the main part has been removed”; stubbing is the process of replacing a real file with a smaller file containing only the information necessary to retrieve the original. The real file can then be stored anywhere and in any format, as long as it can be restored unmodified and in a timely manner.


How can we implement this in SharePoint?

The way I envision it, you would create a new Stubbing Document Library that allows the stubbing mechanism to be configured, while for end users it remains transparent where the file is physically located.

If implemented correctly, not only are the files stored in an external location, but duplicates could be detected and better versioning capabilities would be available, optimizing the storage even further.


A few pieces are needed to make this work:

  • An event receiver for the Add, Update, and Delete events that replaces the real file with the stub.
  • A service that handles the stubbing, versioning, binary diffing, and any other management and reporting on the external storage.
  • An HTTP module that catches the stub just before it reaches the client and calls the service to return the real file to the client (there is no Get event in the event receivers).
  • Admin and configuration pages under the document library settings.
  • If versioning is implemented, it would be nice to have ECB items for getting more info about a certain file.


The Event Receiver:

We are going to need at least the following three events to handle stubbing a new item, updating an existing item, and deleting an item from SharePoint.

Item Added: We let SharePoint finish uploading the file into the document library and then call the service for a new stub, since we have a new file. We can now replace the file with the stub, keeping the filename and extension the same. The stub can be as simple as an XML file.
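For illustration only (the element names here are my own invention, not a defined format), such a stub XML needs to carry just enough to find the original:

```xml
<stub>
  <serviceUrl>http://server/_vti_bin/stubbing.asmx</serviceUrl>
  <fileId>5c1b0a54-7f34-4a8e-9d2b-1f0d2c3e4a5b</fileId>
  <hashAlgorithm>SHA256</hashAlgorithm>
  <version>1</version>
</stub>
```

Anything beyond the service location and a file identifier (hash details, version number) is optional metadata that makes the service's bookkeeping easier.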

Item Deleted: We get the stub and call the web service to delete the file from the external storage.

Item Updating: We call the service for a new stub that is a child of the previous stub, thus enabling versioning regardless of whether the document library has versioning enabled. In fact, enabling versioning on the document library complicates things a little, as we will need to handle more events.


The Stubbing Service:

A Web Service implementing the following methods:

Create Stub:

Parameters: file stream

Creates a hash for the file and checks whether the file is already in the external store (this handles duplicates). If the file exists, a reference count is incremented and the existing stub is cloned and returned. If the file does not exist, the file is stored in the external storage and a new stub is created and returned.

Update Stub:

Parameters: new file stream, old stub

Gets the file referenced by the old stub, creates a binary diff file, and stores the diff in the external storage. It then creates and returns a new stub that embeds the old stub, thus making a new version of the existing file.

Delete Stub:

Parameters: stub

Decrements the reference count for the specified stub; if the reference count reaches zero, the file can be deleted from the external storage.


Get Stub:

Parameters: stub

Gets and returns the file from the external storage. If the file is a binary diff, it recursively looks up the parent and regenerates the real file before returning it.


Most likely there is a need for a database to store hash codes, stub reference counts, and other metadata used by the stubbing service.


HTTP Module:

This is required to be able to replace the stub file with the real file when the user requests the file from SharePoint.

We attach ourselves to the EndRequest event in the Init method.

This allows us to check the content type and the request URL to determine whether the user is about to receive a stub, so we can call the web service and replace it with the real file.


Library Settings Pages:

This is where we can plug in a page to configure the stubbing web service URL and the external storage location.


ECB Items:

It would be nice to have a few items here that give the user some control over, and information about, the stub.

Un-Stub: Replaces the stub with the real file and marks the item so it is not stubbed again.

Re-Stub: Replaces the file with the stub and clears the un-stub flag.

Versions: Displays a page with all existing versions of the current item and offers the possibility of restoring the item to a previous version. It would be nice to allow deleting old versions too.