Using from CSharp

 

Executing Scripts

A script is a string that follows the syntax detailed in the OQL section of this manual.

The main class is OuterCube.OC, and it has two methods to execute scripts:

IEnumerable<ScriptResult> Query( string Script, Action<string> Status = null )
static IEnumerable<ScriptResult> QueryExt( string Script, Action<string> Status = null )

 

The static version of the function allows executing scripts before data is loaded. This is useful for the commands ‘load’ and ‘open’. After data is loaded, the instance version of the function must be used.

The first parameter is the script text. This script can contain a series of commands.

The second optional parameter is used by the script engine to report progress on the execution of a command, for example the number of rows loaded during a load process.

The return value is an enumeration with one result for each command in the script. The commands are not executed until the collection is traversed: execution is lazy, pausing after each command and resuming on the next iteration.

Which property of the ScriptResult class is populated depends on the type of result. The Command and Performance properties are always populated, with the command that was executed and the time span it took to run.

Errors during the execution are returned in the TextResult property and begin with the ‘Error: ‘ string.
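
The static method can be used to run a script before any data is loaded, with the optional callback receiving progress messages. A minimal sketch; the script text is only indicative, see the OQL section for the exact 'open' and 'load' syntax:

foreach ( ScriptResult item in OC.QueryExt(
    "open cola.oc;",                              // illustrative script; see the OQL section for the exact syntax
    status => Console.WriteLine( status )         // progress callback, e.g. rows loaded so far
) ) {
    if ( item.TextResult != null && item.TextResult.StartsWith( "Error: " ) )
        Console.WriteLine( item.TextResult );     // errors come back prefixed with 'Error: '
    else
        Console.WriteLine( item.Command + " took " + item.Performance );
}

Once data is loaded, the instance method is used the same way, and the results can be combined with direct access to dimensions and measures: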

using ( var oc = OC.Open( "cola.oc" ) ) {
    foreach ( ScriptResult item in oc.Query(
        "total sum price * 0.8 [ coke, pepsi ] [ bottle, Package.can ];"
    ) ) {
        MessageBox.Show( item.ToHTML( ) );
    }

    var PriceDietCan = ( oc.Dimensions[ "cat" ].Contexts[ "diet" ]
                       & oc.Dimensions[ "pkg" ].Contexts[ "can" ] )
                       .Aggregate( AggregationFunction.Sum, oc.Measures[ "price" ] );
}

Contexts overload the operators & (and), | (or), - (minus), and ~ (not). Use IsEquivalent to compare two contexts. Expand and Collapse switch a context between its expanded form (much faster, and required to test a particular row) and its compact form. GetRowsID converts the bitmap index into a list of the integer row numbers that belong to the context, Count returns the number of rows, and Index gives access to the underlying index. Use Aggregate to properly aggregate a measure within a context.
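
As an illustration of these members, the sketch below combines the contexts from the example above. It assumes the members behave as described in this paragraph; return types are indicative only:

using ( var oc = OC.Open( "cola.oc" ) ) {
    var diet = oc.Dimensions[ "cat" ].Contexts[ "diet" ];
    var can  = oc.Dimensions[ "pkg" ].Contexts[ "can" ];

    var dietOrCan  = diet | can;    // rows in either context
    var dietNotCan = diet - can;    // rows in 'diet' that are not in 'can'
    var notDiet    = ~diet;         // complement of 'diet'

    Console.WriteLine( dietOrCan.Count );              // number of rows in the combined context
    foreach ( var row in dietNotCan.GetRowsID( ) ) {   // integer row numbers belonging to the context
        // work with individual rows here
    }

    var sumPrice = notDiet.Aggregate( AggregationFunction.Sum, oc.Measures[ "price" ] );
}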

Creating a data provider

One can extend OuterCube to accept data from practically any source. To provide data for the load process, one must implement the OuterCube.Importers.IImport interface:

public interface IImport {
    Import CreateImport( string File, FieldDefinition[ ] Fields );
}

 

The CreateImport method takes the parameters detailed in the OQL script and should return an Import object.

The first parameter, File, contains the string passed as the data source. This is typically a file name, but the interpreter does not validate that it refers to a real file, so other data providers may use other values; a database provider, for example, might use a SELECT statement instead.

The Fields parameter is an array of FieldDefinitions, and contains the list of fields specified in the script. This parameter can also be null if no field was specified.

public class FieldDefinition {
    public string Name { get; set; }
    public DataType FieldType { get; set; }
    public List<object> Parameters { get; set; }
    public string Description { get; set; }
    public bool NotMapped { get; set; }
    public List<ValueName> Values { get; set; }
}

 

  • Name and FieldType are as described in the script.
  • Parameters is the list of values that appeared between the parentheses in the script.
  • Description is the description string given in the script.
  • NotMapped is set to true if the notmapped option was used in the script.
  • Values is the list of values given between the square brackets, with the Name, Value, and Description properties filled in (see the sketch below).
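
To make the mapping concrete, the sketch below builds a FieldDefinition by hand, populated the way the interpreter would after parsing a script. The field and its values are invented for the illustration; the actual script syntax is covered in the OQL section:

var field = new FieldDefinition {
    Name        = "pkg",                      // field name from the script
    FieldType   = DataType.Dimension,         // data type from the script
    Parameters  = new List<object> { 10 },    // values that appeared between the parentheses
    Description = "Packaging type",           // description string from the script
    NotMapped   = false,                      // true only when the notmapped option was used
    Values      = new List<ValueName> {       // values listed between the square brackets
        new ValueName { Name = "bottle", Value = 1, Description = "Glass bottle" },
        new ValueName { Name = "can",    Value = 2, Description = "Aluminium can" }
    }
};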

 

The returned Import class has only two properties:

public class Import {
    public IDataReader Source { get; set; }
    public ImportField[] Fields { get; set; }
}

 

The Source property is an open data reader on the data source.

The Fields property describes how to interpret the fields that are in the data reader. The ImportField class is very similar to FieldDefinition, but it separates what comes from the script from what the load process expects.

As an example, we will use the open source SPSS.NET interop library (http://spss.codeplex.com/) to create a data provider that imports an SPSS file into OuterCube. The purpose of this example is to show how simple it is to create a data provider, not to demonstrate best programming practices.

  • First, we create a solution in Visual Studio containing one DLL project, which we will call OuterCube.Importers.spss
  • Then we add a reference to the OuterCube DLL, and download and reference the spss.net library from Codeplex
  • We add one class to the project, called SPSSDataReader, like this:
public class SPSSDataReader: IDataReader, IImport {
}
  • We then right-click on the interface names and direct Visual Studio to implement them. This adds stubs for all the required methods and properties
  • Then we add these two constructors
    SPSSDataReader( SpssDataDocument doc ) {
        m_doc = doc;
    }
    public SPSSDataReader( ) { }
  • And these fields
    private SpssDataDocument m_doc;
    private int m_curRowIndex = 0;
    private SpssCase m_curCase;
    private bool m_isClosed = false;
  • For the IImport interface, this is the implementation
    public Import CreateImport( string File, FieldDefinition[ ] Fields ) {
        var ret = new Import( );
        var doc = Spss.SpssDataDocument.Open( File, Spss.SpssFileAccess.Read );
        var fields = new List<ImportField>( );
        foreach ( Spss.SpssVariable item in doc.Variables ) {
            var imp = new ImportField( item.Name, DataType.Dimension );
            imp.Description = item.Label;
            if ( item is Spss.SpssNumericVariable ) {
                var numeric = item as Spss.SpssNumericVariable;
                if ( numeric.ValueLabels.Count > 0 ) {
                    foreach ( var key in numeric.ValueLabels.Keys ) {
                        imp.ValueNames.Add( new ValueName { 
                            Name = numeric.ValueLabels[ key ], 
                            Value = Convert.ToInt32( key ), 
                            Description = numeric.ValueLabels[ key ] 
                        } );
                    }
                    imp.AreValueMapped = true;
                }
                else {
                    imp.FieldType = DataType.Measure;
                    imp.ValueNames = null;
                }
            }
            else if ( item is Spss.SpssStringVariable ) {
                // String variables are imported as contexts with no mapped values
                imp.FieldType = DataType.Context;
                imp.AreValueMapped = false;
                imp.ValueNames = null;
            }
            else if ( item is Spss.SpssDateVariable ) {
                // Date variables are imported as measures
                imp.FieldType = DataType.Measure;
                imp.ValueNames = null;
            }
            else {
                imp = null;
            }
            if ( imp != null ) fields.Add( imp );
        }
        ret.Source = new SPSSDataReader( doc );
        ret.Fields = fields.ToArray( );
        return ret;
    }
  • For the IDataReader interface, only a few members need a real implementation
        public void Close( ) { m_isClosed = true; }
        public int Depth { get { return 1; } }
        public bool IsClosed { get { return m_isClosed; } }
        public bool NextResult( ) { return false; }
        public bool Read( ) {
            if ( m_doc.Cases.Count > m_curRowIndex ) {
                m_curCase = m_doc.Cases[ m_curRowIndex ];
                m_curRowIndex++;
                return true;
            }
            return false;
        }
        public int RecordsAffected { get { return 0; } }
        public void Dispose( ) {
            Close( );
            GC.SuppressFinalize( this );
        }
        public int FieldCount { get { return m_doc.Variables.Count; } }
        public int GetOrdinal( string name ) { return m_doc.Variables.IndexOf( m_doc.Variables[ name ] );}
        public object GetValue( int i ) {
            object val = m_curCase[ m_doc.Variables[ i ].Name ];
            if ( null == val )
                return 0;
            return val;
        }
        public int GetValues( object[ ] values ) {
            // Fill the caller-supplied array; do not replace it with a new one
            int count = Math.Min( values.Length, m_doc.Variables.Count );
            for ( int i = 0; i < count; i++ ) {
                SpssVariable v = m_doc.Variables[ i ];
                object curVal = m_curCase[ v.Name ];
                if ( null == curVal )
                    curVal = 0;
                values[ i ] = curVal;
            }
            return count;
        }
        public object this[ string name ] {
            get {
                object val = m_curCase[ name ];
                if ( null == val )
                    return 0;
                return val;
            }
        }
        public object this[ int i ] {
            get {
                SpssVariable v = m_doc.Variables[ i ];
                object val = m_curCase[ v.Name ];
                if ( null == val )
                    return 0;
                return val;
            }
        }

And that is all.