Text Analytics of Movie Reviews using Azure Data Lake, Cognitive Services and Power BI (part 2 of 2)

Applicable Business Scenario

Marketing or data analysts who need to review sentiments and key phrases of a very large data set of consumer-based movie reviews.

Applied Technologies

  1. Azure Data Lake Store
  2. Azure Data Lake Analytics
    1. Cognitive Services
  3. Visual Studio with Azure Data Lake Analytics Tools
  4. Power BI Desktop & Power BI Service
  5. SharePoint Online site and preview of Power BI Web Part

Power BI Desktop

Use Power BI Desktop as the report authoring tool.

Data Source

Get Data from Azure Data Lake Store. Retrieve the output of the U-SQL script executed in Azure Data Lake Analytics in part 1 of this blog series.

TextAnalytics-6.png

Data Source to Azure Data Lake Store

textanalytics-7

Point to the folder containing the .tsv (tab-delimited) files, which were the output of the U-SQL script execution.

Provide credentials to an account that has permissions to the Azure Data Lake Store. In this case, it was an Azure AD account.

Queries

Create a query for each .tsv file.

textanalytics-8

textanalytics-9

Relationships

Define a one-to-many relationship based on the ID of each movie review.

textanalytics-10

Reports/Visualization

Sentiment confidence value for each of the 2,000 movie reviews

textanalytics-11

 

textanalytics-12

Publish

Click ‘Publish’ to upload your report to the Power BI Service in the Microsoft cloud. You can view it at http://app.powerbi.com with your Office 365 or Microsoft account.

textanalytics-13

SharePoint Online

If you want to publish and share this report with a wide audience via a SharePoint Online site, you can leverage the new Power BI web part (in preview as of Feb 2017). I have displayed this report in the latest SPO modern page experience as a publishing page. Note that each user who views the report must have a Power BI Pro license, which is not free.

textanalytics-14

To configure, you need to create a modern publishing page displaying the Power BI report via the Power BI Web Part (preview).

textanalytics-15

Web Part in Edit Mode

textanalytics-16

Enter the report link, which you get from the Power BI Service at http://app.powerbi.com.

textanalytics-17

Options to further extend this solution

  • For the movie reviews .csv file, one can add date/time, the movie being reviewed, genre, location and any other descriptive metadata, thus supporting richer reporting and insights.
  • Overlay this data set against other data sets for correlation, such as related news events, weather, trending movies, other movie review sources, etc. This is to find cause-and-effect relationships for diagnostic insights – “Why is this happening?”.
  • To get data from an internal network into Azure Data Lake or any Azure storage account, one option is the Data Management Gateway. It is installed within the internal network and allows you to transfer files and data from internal data sources with little to no corporate firewall changes. See: Move data between on-premises sources and the cloud with Data Management Gateway.

Closing Remarks

Azure Cognitive Services built into Azure Data Lake Analytics is a suitable option for processing very high volumes of unstructured and complex data, where scalable computing power is needed. In addition, it is priced in a pay-per-use model, making it cost-effective in many scenarios. The agility of Azure services allows you to experiment, iterate quickly and fail fast in finding the right technical solution and applying the right techniques and approach. This article highlighted how data can be ingested, analyzed/processed, modeled, visualized and then published to a business audience.

Text Analytics of Movie Reviews using Azure Data Lake, Cognitive Services and Power BI (part 1 of 2)

Applicable Business Scenario

Marketing or data analysts who need to review sentiments and key phrases of a very large data set of consumer-based movie reviews.

Applied Technologies

  1. Azure Data Lake Store
  2. Azure Data Lake Analytics
    1. Cognitive Services
  3. Visual Studio with Azure Data Lake Analytics Tools
  4. Power BI Desktop & Power BI Service
  5. SharePoint Online site and preview of Power BI Web Part

Azure Data Lake Store

Upload a .csv file of 2,000 movie reviews to a folder in Azure Data Lake Store.

textanalytics-1

Azure Data Lake Analytics

Execute the following U-SQL script either in the Azure Portal (Azure Data Lake Analytics > Jobs > New Job) or in Visual Studio with Azure Data Lake Analytics Tools.

This script references the Cognitive Services assemblies, which come out of the box in the Azure Data Lake Analytics master database.

TextAnalytics-2.png

U-SQL Script

The following script reads the moviereviews.csv file in Azure Data Lake Store and analyzes each review for sentiment and key phrases. Two .tsv files are produced: one with the sentiment and key phrases for each movie review, and another listing each individual key phrase with a foreign key ID back to the parent movie review.

REFERENCE ASSEMBLY [TextCommon];
REFERENCE ASSEMBLY [TextSentiment];
REFERENCE ASSEMBLY [TextKeyPhrase];

// Read each movie review from the .csv file.
@comments =
    EXTRACT Text string
    FROM @"/TextAnalysis/moviereviews.csv"
    USING Extractors.Csv();

// Score each review for sentiment and a confidence value.
@sentiment =
    PROCESS @comments
    PRODUCE Text,
            Sentiment string,
            Conf double
    READONLY Text
    USING new Cognition.Text.SentimentAnalyzer(true);

// Extract the key phrases of each review.
@keyPhrases =
    PROCESS @sentiment
    PRODUCE Text,
            Sentiment,
            Conf,
            KeyPhrase string
    READONLY Text,
             Sentiment,
             Conf
    USING new Cognition.Text.KeyPhraseExtractor();

// Add a row number to act as the ID of each movie review.
@keyPhrases =
    SELECT *,
           ROW_NUMBER() OVER () AS RowNumber
    FROM @keyPhrases;

OUTPUT @keyPhrases
TO "/TextAnalysis/out/MovieReviews-keyPhrases.tsv"
USING Outputters.Tsv();

// Split the key phrases into one row per phrase, keeping RowNumber
// as a foreign key back to the parent movie review.
@kpsplits =
    SELECT RowNumber,
           Sentiment,
           Conf,
           T.KeyPhrase
    FROM @keyPhrases
    CROSS APPLY new Cognition.Text.Splitter("KeyPhrase") AS T(KeyPhrase);

OUTPUT @kpsplits
TO "/TextAnalysis/out/MovieReviews-kpsplits.tsv"
USING Outputters.Tsv();

Azure Portal > Azure Data Lake Analytics > U-SQL execution

Create a new job to execute a U-SQL script.
TextAnalytics-3.png

TextAnalytics-4.png

Visual Studio Option

You need the Azure Data Lake Tools for Visual Studio. Create a U-SQL project and paste in the script, then submit it to Azure Data Lake Analytics for execution. The following shows the successful job summary after the U-SQL script has been submitted.

TextAnalytics-5.png

Click here for Part 2 of 2 of this blog series.

Azure Search: Pushing Content to an Index with the .NET SDK

Blog Series

  1. Azure Search Overview
  2. Pushing Content To An Index with the .NET SDK

I hold the opinion that for a robust indexing strategy, you would likely end up writing a custom batch application between your desired data sources and your defined Azure Search index. The pull method currently only supports data sources that reside in specific Azure data stores (as of Feb 2017):

  • Azure SQL Database
  • SQL Server relational data on an Azure VM
  • Azure DocumentDB
  • Azure Blob storage, Table storage

I would assume that many, at this time, have desired content in websites, databases and LOB applications outside of these Azure data stores.

Azure Search .NET SDK

This article Upload data to Azure Search using the .NET SDK gives great guidance and is what I used, but here’s my specific implementation approach.

To get started, first create a .NET project.

Azure Search Pushing Content to an Index with the .NET SDK-1

Install from NuGet
Azure Search Pushing Content to an Index with the .NET SDK-2

My project with the Microsoft.Azure.Search library
Azure Search Pushing Content to an Index with the .NET SDK-3

 

To start coding, define your search index by creating a model class. I created a generic index schema that I will use to define and create a new search index in the Azure Search service, and to hold a list of movie records as my searchable content.

[SerializePropertyNamesAsCamelCase]
    public partial class IndexModel
    {
        [Key]
        [IsRetrievable(true)]
        public string Id { get; set; }

        [IsRetrievable(true), IsSearchable, IsFilterable, IsSortable]
        public string Title { get; set; }

        [IsRetrievable(true), IsSearchable]
        [Analyzer(AnalyzerName.AsString.EnLucene)]
        public string Content { get; set; }

        [IsFilterable, IsFacetable, IsSortable]
        public string ContentType { get; set; }

        [IsRetrievable(true), IsFilterable, IsSortable, IsSearchable]
        public string Url { get; set; }

        [IsRetrievable(true), IsFilterable, IsSortable]
        public DateTimeOffset? LastModifiedDate { get; set; }

        [IsRetrievable(true), IsFilterable, IsSortable]
        public string Author { get; set; }

    }

Next, I perform three major steps in the Main method of the console app:

  1. Create mock data, as if this data was retrieved from a data source.
  2. Create an index, if one does not already exist, based on the index model class.
  3. Update the index with new or updated content.
public static void Main(string[] args)
        {

            // Mock Data
            List<IndexModel> movies = new List<IndexModel>
            {
                new IndexModel()
                {
                    Id = "1000",
                    Title = "Star Wars",
                    Content = "Star Wars is an American epic space opera franchise, centered on a film series created by George Lucas. It depicts the adventures of various characters a long time ago in a galaxy far, far away",
                    LastModifiedDate = new DateTimeOffset(new DateTime(1977, 01, 01)),
                    Url = @"http://starwars.com"
                },
                new IndexModel()
                {
                    Id = "1001",
                    Title = "Indiana Jones",
                    Content = @"The Indiana Jones franchise is an American media franchise based on the adventures of Dr. Henry 'Indiana' Jones, a fictional archaeologist. It began in 1981 with the film Raiders of the Lost Ark",
                    LastModifiedDate = new DateTimeOffset(new DateTime(1981, 01, 01)),
                    Url = @"http://indianajones.com"
                },
                new IndexModel()
                {
                    Id = "1002",
                    Title = "Rocky",
                    Content = "Rocky Balboa (Sylvester Stallone), a small-time boxer from working-class Philadelphia, is arbitrarily chosen to take on the reigning world heavyweight champion, Apollo Creed (Carl Weathers), when the undefeated fighter's scheduled opponent is injured.",
                    LastModifiedDate = new DateTimeOffset(new DateTime(1976, 01, 01)),
                    Url = @"http://rocky.com"
                }
            };

            AzureSearch.CreateIndexIfNotExists<IndexModel>("movies");

            AzureSearch.UpdateIndex("movies", movies);

            Console.WriteLine("Press any key to exit");
            Console.ReadKey();
        }

In the Azure Portal, you will see the following outcomes:

  • The ‘movies’ index has been created along with 3 documents, as expected.
    Azure Search Pushing Content to an Index with the .NET SDK-4
  • I find that the document count value takes several minutes or more to update, but the indexing is immediate.
  • The fields have been defined along with their types and attributes, based on the index model class.
    Azure Search Pushing Content to an Index with the .NET SDK-5
  • To test the index, use the Search Explorer (or query from code, as shown below).
    Azure Search Pushing Content to an Index with the .NET SDK-6
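
You can also test with a quick query from code. The following is a minimal sketch, assuming an ISearchIndexClient for the ‘movies’ index (as used in the upload code further below):

var results = indexClient.Documents.Search<IndexModel>("boxer");
foreach (SearchResult<IndexModel> result in results.Results)
{
    // Print the title and relevance score of each matching document.
    Console.WriteLine(result.Document.Title + " - " + result.Score);
}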

Here are further code snippet details of the following method calls. I made the create method generic such that you pass in the type of the index model as T; FieldBuilder.BuildForType<T>() then builds out the index schema.

AzureSearch.CreateIndexIfNotExists<IndexModel>("movies");

public static Boolean CreateIndexIfNotExists<T>(string indexName)
{
    bool isIndexCreated = false;

    // Fields to feed the type-ahead suggester (camelCase to match the index schema).
    List<string> suggesterFieldnames = new List<string>() { "title" };

    var definition = new Index()
    {
        Name = indexName,
        Fields = FieldBuilder.BuildForType<T>(),
        Suggesters = new List<Suggester>()
        {
            new Suggester()
            {
                Name = "Suggester",
                SearchMode = SuggesterSearchMode.AnalyzingInfixMatching,
                SourceFields = suggesterFieldnames
            }
        }
    };

    SearchServiceClient serviceClient = CreateSearchServiceClient();

    if (!serviceClient.Indexes.Exists(indexName))
    {
        serviceClient.Indexes.Create(definition);
        isIndexCreated = true;
    }

    return isIndexCreated;
}
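
The CreateSearchServiceClient helper is referenced but not defined above. Here is a minimal sketch, assuming the service name and admin API key live in App.config (the setting names are hypothetical, and it requires a reference to System.Configuration):

private static SearchServiceClient CreateSearchServiceClient()
{
    // Hypothetical app settings; substitute your Azure Search service name and admin key.
    string searchServiceName = ConfigurationManager.AppSettings["SearchServiceName"];
    string adminApiKey = ConfigurationManager.AppSettings["SearchServiceAdminApiKey"];

    return new SearchServiceClient(searchServiceName, new SearchCredentials(adminApiKey));
}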

AzureSearch.UpdateIndex("movies", movies);
Inner method call:

private static void UploadDocuments(ISearchIndexClient indexClient, List<IndexModel> contentItems)
{
    // MergeOrUpload updates existing documents by key, or adds them if they do not exist.
    var batch = IndexBatch.MergeOrUpload(contentItems);
    indexClient.Documents.Index(batch);
}
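
The UpdateIndex wrapper itself is not listed above; a minimal sketch, assuming it simply resolves an index client and delegates to UploadDocuments, might look like:

public static void UpdateIndex(string indexName, List<IndexModel> contentItems)
{
    // Get a client scoped to the target index, then push the batch of documents.
    ISearchIndexClient indexClient = CreateSearchServiceClient().Indexes.GetClient(indexName);
    UploadDocuments(indexClient, contentItems);
}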

 

In conclusion, I generally recommend the push approach using the Azure Search .NET SDK, as it offers more control and flexibility. Just as I created a create-index method, you should create a delete-index method. This helps during the development process as you iterate on defining your index schema. Even in production scenarios, it can be appropriate to delete your index, re-create it with an updated schema and then re-index your content.
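
A minimal sketch of such a delete method, assuming the same CreateSearchServiceClient helper as above:

public static void DeleteIndexIfExists(string indexName)
{
    SearchServiceClient serviceClient = CreateSearchServiceClient();

    // Deleting an index permanently deletes all documents stored in it.
    if (serviceClient.Indexes.Exists(indexName))
    {
        serviceClient.Indexes.Delete(indexName);
    }
}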


SharePoint 2016 Preview Large List Automatic Indexing with Deep Dive Analysis

The list view threshold (LVT) has been a pain point in some SharePoint sites that I have seen. The default setting in SharePoint 2016 Preview is still 5,000 as it is in 2013.

In cases where lists contain >5,000 items, users will eventually encounter the following message and the list is not displayed.

autoindex-1

According to the Software boundaries and limits for SharePoint 2013 article, the definition of the list view threshold (LVT) is:

“Specifies the maximum number of list or library items that a database operation, such as a query, can process at the same time outside the daily time window set by the administrator during which queries are unrestricted.”

To manage this constraint in SharePoint 2010 and 2013, read the article Manage lists and libraries with many items.

In my opinion, many don’t quite understand what this really is or how to manage it properly. Many project stakeholders other than SharePoint SMEs take it to be a limit on how many items can be queried from a list. Rather, it is about the number of rows the SQL database has to ‘scan’ as a result of the list view’s query.

For example, let’s say we have a list of 30,000,000 items, of which 4,999 have a Country column value of Canada, and the list view threshold is set at 5,000. There is a custom list view with the filter condition Country = ‘Canada’.

Although it seems that this list view queries only 4,999 items, what is really happening at the SQL database table level is that all 30,000,000 rows are being scanned.

A recommended solution is to index the column through Indexed Columns in the list settings.

autoindex-2

Note that the indexing of columns is not a SQL-based index such as a non-clustered index, but rather indexing through the NameValuePair_Latin1_General_CI_AS table in the respective content database.
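
For reference, a column can also be marked as indexed programmatically through the SharePoint server object model. A minimal sketch (the site URL is a placeholder, and the list and column names are those of the demo below):

using (SPSite site = new SPSite("http://<hostname>/sites/demo"))
using (SPWeb web = site.OpenWeb())
{
    SPList list = web.Lists["Large List"];
    SPField field = list.Fields["Status"];

    // Marks the column as indexed; SharePoint tracks this through the
    // NameValuePair table rather than a SQL non-clustered index.
    field.Indexed = true;
    field.Update();
}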

The new Automatic Index Management setting

Now, in SharePoint 2016 Preview, there is a new list setting to automatically index columns, found in List Settings > Advanced Settings. The default is set to ‘Yes’.

autoindex-3

The automatic indexing is supported by the ‘Large list automatic column index management job’.
Go to Central Administration > Monitoring > Review Job Definitions

sp16prev-autoIndexTimerJob

Large List Demo

Configuration

  • Central Administration > Select Web Application > General – Resource Throttling
    • List view threshold for end users set to 10,000
    • List view threshold for auditors and administrators set to 20,000
      Note: I doubled the default values just for general testing.

autoindex-5

  • Test User: Added a ‘Roy Kim’ user account with only Contribute permissions so that I could simulate the list view threshold without the special exceptions that a site collection administrator would have.
  • Custom list
    • Named ‘Large List’
    • Added site columns: Status, Gender, City, Province/State, Country
    • Added 25,146 items with values for the custom columns, including Status (via a PowerShell script; see the C# equivalent sketched below)
    • Created a view ‘By Not Started’ where Status equals ‘Not Started’

autoindex-6
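
For reference, a C# server object model equivalent of that bulk-load script might look like the following minimal sketch (the site URL is a placeholder and the values are illustrative):

using (SPSite site = new SPSite("http://<hostname>/sites/demo"))
using (SPWeb web = site.OpenWeb())
{
    SPList list = web.Lists["Large List"];
    string[] statuses = { "Not Started", "In Progress", "Completed" };

    for (int i = 1; i <= 25146; i++)
    {
        // Cycle through the Status values so each view filters to a subset.
        SPListItem item = list.Items.Add();
        item["Title"] = "Item " + i;
        item["Status"] = statuses[i % statuses.Length];
        item.Update();
    }

    // Note: item-by-item inserts are slow at this volume;
    // SPWeb.ProcessBatchData is a faster bulk-load option.
}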

  • Large list automatic column index management job
    • Allow the timer job to run on its schedule or manually run it immediately.
  • Indexed Columns
    • The Status column has been automatically indexed.
      autoindex-7

SharePoint 2016 Preview Install – First look

SharePoint 2016 Preview was released yesterday on Aug 24.

Download from here: https://www.microsoft.com/en-us/download/details.aspx?id=48712

Announcement: https://blogs.office.com/2015/08/24/announcing-availability-of-sharepoint-server-2016-it-preview-and-cloud-hybrid-search/

After installing, here are my comments as I walk through for noticeable changes:

  1. Similar to Office 365, there is an ‘App Launcher’ at the top left.

Newsfeed, OneDrive and Sites sit under your personal My Site.
http://<hostname>/my/personal/<username>/…

sp16preview-app launcher

2. Under List Settings, there is a new setting ‘Automatic Index Management’. This may help with the list view threshold constraint, which defaults to 5,000 items for a list.

sp16preview-autoindexing

There is a new ‘Large list automatic column index management job’ timer job that may support this setting on the configured lists.

sp16prev-autoIndexTimerJob

After running this timer job, I went to List Settings > Indexed Columns and did not notice any indexed columns. Perhaps the “indexing” can be seen elsewhere; I will have to continue to investigate.

3. Central Administration > Office 365 > Configure hybrid OneDrive and Site Features

SharePoint Hybrid Solutions Center: http://go.microsoft.com/fwlink/?LinkID=613711

sp16preview-O365hybridconfig

4. There are many more new timer jobs, for a total of 228. In this screenshot, you may notice some that are new, such as DeleteUnusedAbs, Document Changed Anti-virus Processing, DrainInlineStreams and Dump site information. I am not sure what these do, but get ready to understand them and leverage them accordingly.

sp16prev-newTimerJobs

To compare to a list of SharePoint 2013 timer jobs, read https://technet.microsoft.com/en-us/library/cc678870.aspx

5. “layouts/15” in URL and SharePoint Root Folders

For example, http://<hostname>/_layouts/15/start.aspx#/Shared%20Documents/Forms/AllItems.aspx

The URL contains “_layouts/15” rather than “_layouts/16”, which would take after the product’s major version number. Maybe that is why we are seeing the 2013 UI, and perhaps a future release will bring a change in the UI look.

Also, in the file system there is still the SharePoint 14 root folder (i.e. SharePoint 2010). Maybe this will go away in the final release.

sp16prev-rootFolder

6. New Video thumbnails and playback.

I uploaded a 475 MB video I grabbed from channel9.msdn.com into the Documents library. I happened to be running a PowerShell script adding 10,000 list items to a custom list, and when playing the video in this thumbnail, the playback was a little choppy and slow. So even though this is a single-server farm setup, one has to consider the scalability and performance of video playback. This video is surely stored in the SQL content database.

sp16prev-videoThumbNail

Speaking at the Calgary .NET User Group this Wed Dec. 14 ’11

I’m looking forward to speaking at the Calgary .NET User Group about the www.calgary.ca site.
Wed Dec 14, 2011

Presenting features, applied technologies and approaches:

SharePoint 2010
.NET Framework, ASP .NET, Windows Communication Foundation
Google Search Appliance
Arc GIS
Webtrends 
Mobile Browsing 

Click the link for event details:

http://www.dotnetcalgary.com/Events/tabid/61/ModuleID/381/ItemID/9/mctl/EventDetails/Default.aspx