I've recently been engaged in several events of potential interest to the geophysics community. In December 2014, a few of us from MathWorks attended the AGU meeting (American Geophysical Union).
And last week, I was invited to give a webinar to the IRIS (Incorporated Research Institutions for Seismology) community. The video for MATLAB for Analyzing & Visualizing Geospatial Data is here.
Next, I would like to pass on some timely information from Chad Trabant and Robert Weekly of IRIS about accessing seismological data via IRIS sites and services.
Traditionally, seismologists spend considerable time and effort collecting and organizing the data they wish to use for a study. To reduce this prep work, the IRIS Data Management Center (DMC) is constantly working to remove barriers associated with data collection and formatting, so that data arrives in a form that is easy to use and researchers have more time to actually perform their research.
The DMC operates one of the largest repositories of openly available seismological time series data. Data managed by the DMC is retrieved from stations around the world, with international partners contributing data, in addition to data collected by U.S. institutions. These data are primarily from passive source sensors and are what most folks are familiar with when they see earthquake signals.
One of the primary charges of the DMC is to collect, archive, and distribute data not only to university researchers and teachers, but to anyone with internet access. A significant amount of work at the DMC is dedicated to building software packages that enable people to access the data stored in the DMC archives through a variety of web-enabled clients.
For MATLAB users, the DMC has developed irisFetch.m, which provides seamless access to data stored at the DMC, as well as other data centers that implement standard web services.
This bit of MATLAB code operates as an abstraction layer to the DMC IRISWS Java library, which in turn fetches data and information from a data center via web services. Importantly, a user does not need to know anything about Java or networking: syntax for the functions provided by irisFetch.m should be familiar to users of the MATLAB language, and all data is returned as MATLAB structures.
The irisFetch module provides access to station metadata, time series data, and event (earthquake) information.
Help getting started using irisFetch.m is available in the online manual for irisFetch. Examples are also included below.
The core web services used by irisFetch for delivering time series, related metadata, and event (earthquake) information have been standardized by the FDSN (International Federation of Digital Seismograph Networks). The DMC has purposely designed irisFetch, and the Java library it uses, to work with any services compliant with the specification. This means that irisFetch may be used to retrieve data from any data center supporting these services. However, not all data centers have implemented every FDSN-standardized service, so functionality with irisFetch is limited to what any given center supports. A list of FDSN data centers and the services they support is here.
A short aside on seismic data nomenclature and norms
In the world of passive source seismology (SEED format specifically), time series are identified with four identifiers: network, station, location, and channel. Most of the data available from the DMC is recorded continuously, sometimes for decades, so a time range must also be specified to form a minimal request. In the irisFetch functions, time series are selected using these identifiers. In most cases, seismological data is returned in units of "digital counts", which are often proportional to velocity or acceleration within a certain frequency range.
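To see what such a minimal request looks like at the web-service level, here is a Python sketch that builds a query against the public FDSN dataselect endpoint at the DMC (irisFetch issues requests like this for you behind the scenes; the station chosen here is just an example):

```python
from urllib.parse import urlencode

def dataselect_url(net, sta, loc, cha, start, end):
    """Build a minimal FDSN dataselect query: the four SEED identifiers
    plus a time range (illustrative only -- irisFetch does this for you)."""
    base = 'http://service.iris.edu/fdsnws/dataselect/1/query'
    return base + '?' + urlencode({'net': net, 'sta': sta, 'loc': loc,
                                   'cha': cha, 'start': start, 'end': end})

url = dataselect_url('IU', 'ANMO', '00', 'BHZ',
                     '2011-03-11T15:10:00', '2011-03-11T16:10:00')
```

Fetching that URL returns miniSEED data for the requested channel and time window.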
The irisFetch module comprises three separate methods that can be used to retrieve station metadata, time series, and event information. They are irisFetch.Stations, irisFetch.Traces, and irisFetch.Events, respectively. Below are some examples of how each of these components of irisFetch could be used for common data retrieval tasks.
Example 1
Plot time series for the 2011 Tohoku-Oki earthquake. The example below uses irisFetch.Traces to retrieve time series data for global GSN stations and plot 1 hour of data (bandpass filtered from 1-5 Hz) following the Tohoku-Oki earthquake of March 2011.
sta = {'MAJO','WMQ','KAPI','CMB'};
for i=1:length(sta)
    tohoku_tr(i) = irisFetch.Traces('_GSN',sta{i},'00','BHZ',...
        '2011-03-11 15:10:00','2011-03-11 16:10:00','verbose');
end
% set filter parameters
bandfilt_freq1 = 1;
bandfilt_freq2 = 5;
bandfilt_order = 4;
colors = brighten(lines(numel(tohoku_tr)),0.33);
figure(1)
for i=1:numel(tohoku_tr)
    subplot(4,1,i)
    tr = tohoku_tr(i);
    data = (tr.data - mean(tr.data)) ./ tr.sensitivity;
    wn1 = bandfilt_freq1/tr.sampleRate;
    wn2 = bandfilt_freq2/tr.sampleRate;
    [f1,f2] = butter(bandfilt_order,[wn1 wn2],'bandpass');
    data = filter(f1,f2,data);
    sampletimes = linspace(tr.startTime,tr.endTime,tr.sampleCount);
    plot(sampletimes, data, 'color', colors(2,:),'LineWidth',1);
    datetick;
    if i==1
        title('Tohoku-Oki Earthquake, 11-March-2011','FontSize',14)
    end
    if i~=numel(tohoku_tr)
        set(gca,'xticklabel','');
    else
        xt = get(gca,'XTick');
        xtl = get(gca,'XTickLabel');
        xt = xt(1:2:end);
        xtl = xtl(1:2:end,:);
        set(gca,'XTick',xt,'XTickLabel',xtl);
    end
    set(gca,'YTick',[],'YTickLabel','','TickDir','out',...
        'TickLength',[.005 .015],'FontSize',12);
    ylabel([tohoku_tr(i).network '.' tohoku_tr(i).station])
    hold on;
end
Example 2
Plot a map of Transportable Array (TA) network stations and recent earthquakes in Oklahoma. This example uses irisFetch.Events and irisFetch.Stations to retrieve hypocenter data and station metadata, respectively, to make a simple plot in MATLAB.
minlat = 35; maxlat = 37.5;
minlon = -100; maxlon = -95;
starttime = '2012-01-01 00:00:00';
endtime = '2015-02-01 00:00:00';
maxmag = 4.0; minmag = 0.0;
ok_ev = irisFetch.Events('boxcoordinates',[minlat,maxlat,minlon,maxlon],...
    'maximumMagnitude',maxmag,'minimumMagnitude',minmag,...
    'startTime',starttime,'endTime',endtime,...
    'baseurl','http://service.iris.edu/fdsnws/event/1/');
ok_sta = irisFetch.Stations('station','TA','*','*','BH?',...
    'boxcoordinates',[minlat,maxlat,minlon,maxlon]);
figure(2)
plot([ok_ev.PreferredLongitude],[ok_ev.PreferredLatitude],'r.',...
    'MarkerSize',10)
hold on
plot([ok_sta.Longitude],[ok_sta.Latitude],'b^','MarkerFaceColor','b')
deg2in = 6.0/diff(xlim);
set(gca,'units','inches','pos',...
    [1.5 1 diff(xlim)*cosd(35)*deg2in diff(ylim)*deg2in],...
    'FontSize',12,'TickDir','out')
l = legend('Earthquake','TA station','Location','NorthEast');
set(l,'FontSize',12)
xlabel('Longitude')
ylabel('Latitude')
title(['M<' num2str(maxmag) ' - ' datestr(starttime(1:10),1) ' to ' ...
    datestr(endtime(1:10),1) ' - ' ...
    num2str(numel(ok_ev)) ' total events'],'FontSize',14)
Example 3
Retrieving data from other data centers. The versatility of the irisFetch methods allows users to access data from web services hosted by other data centers, as long as they conform to the specification set by the FDSN (as discussed above). This example uses data obtained from the USGS fdsnws-event service and some simple commands from the MATLAB Mapping Toolbox.
minmag_glob = 6;
tstart = '2000-01-01 00:00:00';
glob_ev = irisFetch.Events('startTime',tstart,'minimumMagnitude',minmag_glob,...
    'baseurl','http://earthquake.usgs.gov/fdsnws/event/1/');
figure(3)
% Here we use the MATLAB Mapping Toolbox
h = worldmap('world');
geoshow(h,'landareas.shp','FaceColor',[.3 .3 .3])
l = geoshow(h,[glob_ev.PreferredLatitude],[glob_ev.PreferredLongitude],...
    'DisplayType','point');
set(l,'Marker','.')
mlabel('GLineWidth',.2,'FontSize',12,...
    'MLabelLocation',90,'MLabelParallel','south');
title(['Global M>' num2str(minmag_glob) ' events since ' datestr(tstart(1:10),1)],...
    'FontSize',14)
How do you access the seismological data you analyze: irisFetch, or something else? What are the stumbling blocks you encounter when accessing or working with seismological data? Let us know here.
Get the MATLAB code
Published with MATLAB® R2014b
Single and double precision are combined to facilitate a triple precision accumulated inner product.
In my previous post on iterative refinement I showed the need for an inner product routine that is more accurate than double precision. This post is about such a function.
The example I am going to use is contrived so that the first and third terms in the inner product exactly cancel each other, leaving the much smaller second term to arise from the ashes.
format long e
x = [1 1/3 1]'
y = [1 3e-9 -1]'
x =
     1.000000000000000e+00
     3.333333333333333e-01
     1.000000000000000e+00
y =
     1.000000000000000e+00
     3.000000000000000e-09
    -1.000000000000000e+00
Computing the inner product with a conventional MATLAB statement shows the intended difficulty.
dot2p = x'*y
dot2p =
     1.000000082740371e-09
The result should be 1.000000000000000e-09. We're getting only about half of the significant digits correct.
Of course, the problem is that the second intermediate sum is
s2 = 1 + 1/3*3e-9
s2 = 1.000000001000000e+00
That's OK in decimal, but not in binary. There is not enough room in one double precision floating point number to store the bits that are going to be needed when that leading one is cancelled by the third term in the sum.
I do not have a full-blown triple precision arithmetic package by any means. I have just enough to do this one task. Here is the basic idea. A double precision floating point number has 14 hexadecimal digits in its fraction. I can use single and double to break a double into 6 high-order hex digits and 8 low-order hex digits, like this.
format hex
x = 1/3
xhi = double(single(x))
xlo = x - xhi
x =
   3fd5555555555555
xhi =
   3fd5555560000000
xlo =
   be45555556000000
Two doubles with more than half of their low order bits equal to zero can be multiplied together with no roundoff error. For example
pihi = double(single(pi))
pio3hi = xhi*pihi
pihi =
   400921fb60000000
pio3hi =
   3ff0c1524860a920
That trailing zero in pio3hi is an indication that the result is exact.
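The same splitting trick can be sketched outside MATLAB. In this Python sketch, a `chop` helper (an assumption of mine, built on the standard `struct` module) rounds a double through single precision, standing in for MATLAB's `double(single(x))`; the `Fraction` check confirms the product of two chopped numbers is computed with no roundoff:

```python
import math
import struct
from fractions import Fraction

def chop(x):
    """Round a double through IEEE single precision and back,
    emulating MATLAB's double(single(x))."""
    return struct.unpack('f', struct.pack('f', x))[0]

x = 1/3
xhi = chop(x)    # high-order part: at most 24 significant bits
xlo = x - xhi    # low-order part; this subtraction is exact

pihi = chop(math.pi)
prod = xhi * pihi   # two 24-bit factors: the 48-bit product fits in a double
```

Here `xhi + xlo` recovers `x` exactly, and `prod` equals the mathematically exact product of `xhi` and `pihi`.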
Additions are not exact, when the two numbers differ by several orders of magnitude. This fact will eventually be the limiting factor of our inner product routine.
My inner product routine is both accumulated, which means it uses extra-precise arithmetic, and extended, which means an extra scalar is added to the result using the extra precision. You can download this function from here.
dbtype dot3p
 1   function s = dot3p(x,y,s)
 2   % DOT3P  s = dot3p(x,y,s) Triple precision extended inner product.
 3   %        s = x'*y + s for vectors x and y and scalar s.
 4
 5   shi = double(single(s));
 6   slo = s - shi;
 7   for k = 1:length(x)
 8      xhi = double(single(x(k)));
 9      xlo = x(k) - xhi;
10      yhi = double(single(y(k)));
11      ylo = y(k) - yhi;
12      tmp = xhi*yhi;
13      zhi = double(single(tmp));
14      zlo = tmp - zhi + xhi*ylo + xlo*yhi + xlo*ylo;
15
16      tmp = shi + zhi;
17      del = tmp - shi - zhi;
18      shi = double(single(tmp));
19      slo = tmp - shi + slo + zlo - del;
20   end
21   s = shi + slo;
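For readers without MATLAB, here is an unofficial Python transliteration of the same algorithm (the `chop` helper is my stand-in for `double(single(x))`, built on the standard `struct` module):

```python
import struct

def chop(x):
    """Emulate MATLAB's double(single(x)): round through single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

def dot3p(x, y, s=0.0):
    """Triple precision extended inner product: x'*y + s."""
    shi = chop(s)
    slo = s - shi
    for xk, yk in zip(x, y):
        xhi = chop(xk); xlo = xk - xhi
        yhi = chop(yk); ylo = yk - yhi
        tmp = xhi * yhi              # exact: both factors have short fractions
        zhi = chop(tmp)
        zlo = tmp - zhi + xhi*ylo + xlo*yhi + xlo*ylo
        tmp = shi + zhi
        dl = tmp - shi - zhi         # roundoff committed by the addition
        shi = chop(tmp)
        slo = tmp - shi + slo + zlo - dl
    return shi + slo
```

On the contrived example, this recovers the small answer that a conventional double precision inner product gets only half right.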
Let's run my example with dot3p in the debugger and look at some intermediate results.
% dbstop 16 dot3p
% dbstop 20 dot3p
% dot3p(x,y,0)
The variables shi and slo carry the sum in triple precision. The first time through the loop there are no roundoff errors, and shi and slo are set to 1.0 and 0.0.
Let's look at the second pass through the loop. Halfway through, at statement 16.
K>> format hex
K>> xhi,xlo,yhi,ylo,tmp,zhi,zlo
xhi =
3fd5555560000000
xlo =
be45555556000000
yhi =
3e29c511e0000000
ylo =
bc7e2df108000000
tmp =
3e112e0bf341b0a0
zhi =
3e112e0c00000000
zlo =
bc97d9296b9a0d85
K>>
K>> format long e
K>> xhi,xlo,yhi,ylo,tmp,zhi,zlo
xhi =
     3.333333432674408e-01
xlo =
    -9.934107481068821e-09
yhi =
     3.000000026176508e-09
ylo =
    -2.617650809219019e-17
tmp =
     1.000000038527825e-09
zhi =
     1.000000082740371e-09
zlo =
    -8.274037106125165e-17
We can see that xhi is the first six hex digits of 1/3 and xlo is the remaining digits. Similarly, yhi is the first six hex digits of 3e-9 and ylo is the remaining digits. zhi is the first six hex digits of xhi*yhi and zlo is a crucial correction term.
Stopping at the end of the second pass through the loop, at statement 20.
K>> format hex
K>> tmp, del, shi, slo
tmp =
3ff000000044b830
del =
0000000000000000
shi =
3ff0000000000000
slo =
3e112e0be826d694
K>>
K>> format long e,
K>> tmp, del, shi, slo
tmp =
1.000000001000000e+00
del =
0
shi =
1
slo =
9.999999999999999e-10
tmp is 1.0 plus some of the bits of 1.0e-9. del is zero because it is not needed in this example; it is involved when the terms vary over an even wider range. shi is exactly 1.0, which is the high order part of the evolving sum. And slo has become 1.0e-9 to full double precision accuracy.
On the third time through the loop there will be no roundoff errors, shi will be completely cancelled, and slo will bear the full responsibility for providing the final answer.
Of course, this example is contrived and unusual. Ordinarily, we can expect some cancellation (otherwise there would be no need for an accumulated inner product), with the high order part losing at least a few digits and the low order part filling them in.
With a dot product in hand, it is easy to write the residual function. Here the extended feature is essential because we expect extreme, if not total, cancellation when the right hand side is subtracted from the dot product.
type residual3p
function r = residual3p(A,x,b)
% RESIDUAL3P  Triple precision residual, A*x - b.
%    r = residual3p(A,x,b) for matrix A, vectors x and b.
m = size(A,1);
r = zeros(m,1);
for k = 1:m
   r(k) = dot3p(A(k,:),x,-b(k));
end
end
You can download this function from here.
This afternoon my wife looked at me with an expression indicating she was convinced that I had finally gone completely and irrevocably crazy.
I think it was because I was carrying from room to room a flashlight, a candle, a lighter, and a chessboard scattered with colored craft sticks and puff balls.
"It's about that dress," I said.
Yes, that dress. #TheDress, as it is known on Twitter.
Three nights ago I idly opened Twitter to see what was up. The Internet was melting down over: a) llamas, and b) a picture of a dress. (I have nothing more to say here about llamas.)
This is the picture as it was originally posted on Tumblr:
Image credit: Tumblr user swiked
Some people see this dress as blue and black. Some see it as white and gold. Each group can't understand why the others see it differently.
By Friday afternoon, a myriad of explanations had popped up online and on various news outlets. Mostly, I found these initial attempts to be unsatisfying, although some better explanations have been published online since then.
Initially I didn't want to write a blog about this, because (as I often proclaim) color science makes my brain hurt. But I do know a little bit about how color scientists think, having worked with several, having read their papers and books, and having implemented their methods in software. So, here is my interpretation of this unusual visual phenomenon. It's in three parts: the influence of illumination, color constancy, and the dress photo itself.
Let's start with the influence of illumination. Here is a small portion of a picture that I took today.
"Sage green," my wife said.
And here's a portion of a different picture.
"That's yellow," came the answer.
The truth: these two colors are from the same location of the same object. Here are the two original images with the locations marked.
Image A
Image B
The chessboard and other objects in these pictures are the same. The difference between the two images is caused entirely by the different light sources used for each one. Just for fun here are the colors of the puff ball on the upper right from two different pictures. (Remember, these are pixels from the exact same spot on the same object!)
The color of the light arriving at the camera depends not only on the color of the object, but also on the nature of the illumination. As you can see in the colored patches above, changing the illumination can make a big difference. So you cannot definitively determine the dress color solely from close examination of the digital image pixels.
Look at Image A again. What is the color of the index card?
Most people would call it white. If you look at just a chunk of pixels from the center of the card, though, it looks like a shade of green.
People have an amazing ability to compensate automatically and unconsciously for different light sources in a scene that they are viewing. If you looked at the same banana under a bright fluorescent light, and in candle light, and in the shade outdoors under a cloudy sky, you would see the banana as having the same color, yellow, each time. That is true even though the color spectrum of the light coming from the banana is actually significantly different in these three scenarios. Our ability to do this is called color constancy.
Our ability to compensate accurately for illumination depends on having familiar things in the scene we are viewing. It can be the sky, the pavement, the walls, the grass, the skin tones on a face. Almost always there is something in the scene that anchors our brain's mechanism that compensates for the illumination.
Now we come back to the photo of the dress. The photo completely lacks recognizable cues to help us compensate for illumination. Our brain tries to do it anyway, in spite of the lack of cues. The different reactions of people around the world suggest that there are two dramatically different solutions to the problem.
Consider the diagram below. It illustrates two scenes. In the first scene, on the left, a hypothetical blue and black dress is illuminated by one source. In the second scene, on the right, a hypothetical white and gold dress is illuminated by a second source. By coincidence, both scenes produce the same light at the camera, and therefore the two photographs look the same.
Note: I tweaked the diagram above at 02-Mar-2015 14:50 UTC to clarify its interpretation.
For some people, their visual system is jumping to illumination 1, leading them to see a blue and black dress. For others, their visual system is jumping to illumination 2, leading them to see a white and gold dress.
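The two-scenes idea can be made concrete with toy numbers (these RGB values are entirely made up for illustration; real colorimetry involves full spectra, not three channels): two different fabric/illuminant pairs that send identical values to the camera.

```python
def camera_rgb(reflectance, illuminant):
    """Light reaching the camera: per-channel product (a crude model)."""
    return tuple(round(r * i, 6) for r, i in zip(reflectance, illuminant))

# Scene 1: a blue fabric under warm, bright light
blue_dress   = (0.30, 0.35, 0.50)
warm_light   = (1.00, 0.80, 0.50)

# Scene 2: a white fabric under dim, bluish shade
white_dress  = (0.60, 0.56, 0.50)
bluish_shade = (0.50, 0.50, 0.50)
```

Both pairs multiply out to the same camera signal, so a viewer who assumes the wrong illuminant infers the wrong fabric color.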
On Friday afternoon, I heard a report that many people in the white and gold camp, when they see a version of the photograph that includes a woman's face, immediately change their perception of the dress to blue and black. This perceptual shift persists even when they view the original photograph again. This demonstrates that the presence of an object with a familiar color can significantly alter our perception of colors throughout the scene.
If you zoom way in and examine the pixels on the dress in the original image, you'll see that they are blue.
So what kind of illumination scenario could cause people to perceive this as white? I asked Toshia McCabe, a MathWorks writer who knows more than I do about color and color systems. She thinks the dress picture was edited from another one in which the dress was underexposed. As a result, the "blue coincidentally looks like a white that is in shadow (but daylight balanced)." In other words, light from a white object in shadowed daylight can arrive as blue light to the camera. So if your eye sees blue pixels, but your brain jumps to the conclusion that the original scene was taken in shaded daylight, then your brain might decide you are looking at a white dress.
For the record, my wife sees a white and gold dress on a computer monitor, but she sees blue and brown when it is printed. I see blue and brown.
Enjoy!
Sean's pick this week is ForEach by Jeremy Hughes.
My pick this week is a toolbox from our development organization for a foreach loop construct in MATLAB. A foreach loop allows you to iterate over a set or dimension without having to worry about indexing.
MATLAB actually already has this built in for single elements of numeric arrays:
% Display each element
for thisfile = [1 pi 8 3]
    disp(thisfile)
end
     1
    3.1416
     8
     3
This could of course have been written with an index, like below.
% Display each element with an index
thisfile = [1 pi 8 3];
for ii = 1:numel(thisfile)
    disp(thisfile(ii))
end
     1
    3.1416
     8
     3
But what if we want to traverse every column or slice of a three-dimensional array? Or each element in a cell array? Or each combination of two arrays? This is where Jeremy's foreach construct comes in handy, since you won't need to worry about indexing.
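As a sketch of the same idea in another language (these are hypothetical helpers of mine, not Jeremy's toolbox), generators give you "each column" and "each combination" iteration without explicit indexing:

```python
from itertools import product

def each_column(a):
    """Yield each column of a 2-D array stored as a list of equal-length rows."""
    for j in range(len(a[0])):
        yield [row[j] for row in a]

def each_combination(*arrays):
    """Yield every combination of elements, one drawn from each input."""
    yield from product(*arrays)
```

The caller just loops over the generator; the bookkeeping of row and column indices lives in one place.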
Let's take a bunch of csv files and make copies of them with just the rows of data we care about. These files contain information about energy outages in America. Being from New England, I just want the records that correspond to the North East.
% Scrape the data directory for all csv files
datadir = '.\Data\';
files = dir([datadir '*.csv']);
filenames = {files.name};
disp(filenames)
  Columns 1 through 4
    '2002Data.csv'    '2003Data.csv'    '2004Data.csv'    '2005Data.csv'
  Columns 5 through 8
    '2006Data.csv'    '2007Data.csv'    '2008Data.csv'    '2009Data.csv'
  Columns 9 through 12
    '2010Data.csv'    '2011Data.csv'    '2012Data.csv'    '2013Data.csv'
% Make a directory for output
outputdir = 'NorthEastData';
mkdir(outputdir)
% Loop over each filename, read, extract, write
for thisfile = each(filenames)
    % Read each file
    Temp = readtable([datadir thisfile]);
    Temp.Region = categorical(Temp.Region); % Convert to categorical
    % Identify and extract the northeast records
    northeast = Temp.Region == 'NorthEast';
    Output = Temp(northeast,:);
    % Make new full file name
    [~,name,ext] = fileparts(thisfile);
    outputfile = fullfile(pwd,outputdir,[name '_NorthEast' ext]);
    % Write out
    writetable(Output,outputfile)
end
Now we can check to make sure it did what we expect.
Personally, I see this being really useful for working with sets of files or images, as I have done above, or for needle-in-a-haystack problems where I want to find something in a bunch of combinations and stop as soon as I do.
And now for a challenge:
I used the informal interface to this toolbox, i.e. the each function, to traverse each file. There is also a formal interface that provides the ability to define your own iterators so that you can traverse objects of your own class or anything else. I will give some MathWorks goodies to anyone who will write their own eachFile iterator that I could use above to recreate what I have.
Please post it below.
The development team is actively looking for feedback on this, so please leave them a comment with any thoughts you have.
Give it a try and let us know what you think here.
Learning by watching videos has become standard technique for students of everything from piano to surgery. And increasingly people are learning to code by watching other people code.
On this site we have lots of clever people solving MATLAB problems on Cody. I would love to see somebody post a video to YouTube that shows them actually in the process of solving a Cody problem. Or maybe just describing, after the fact, how they solved it. Or even analyzing, sports-commentator style, the various techniques used to solve a particular problem. If you make a "How I Solved It" video for Cody, leave a link here and tell us about it!
And just to put my money where my mouth is, I have provided a humble example here: How I Solved It: MATLAB Pangrams (Cody Problem 27).
Thank you,
Doug
In an earlier post, today's guest blogger Toshi Takeuchi gave us an introduction to Market Basket Analysis. This week, he will discuss how to scale this technique using MapReduce to deal with larger data.
R2014b was a major update to MATLAB core functionality, and one of the new features most exciting to me was MapReduce. I was primarily interested in Market Basket Analysis to analyze clickstream data, and I knew web usage data extracted from web server logs would be very large.
MapReduce was developed to process massive datasets in a distributed parallel computation, and it is one of the key technologies that enabled Big Data analytics.
MapReduce is made up of mappers and reducers. Mappers read data from file storage one chunk at a time and parse it to generate key-value pairs. Reducers then receive those key-value pairs, grouped by key, and process the values associated with each key. Therefore, what you need to do is write a mapper function and a reducer function.
Though mappers and reducers perform fairly simple operations, you can chain them together to handle more complex operations. In the Apriori algorithm, the most time-consuming steps are the generation of transactions and 1-itemset data. So let's use MapReduce to address these bottlenecks.
We start by setting up datastore. In this example, we are going to use a fairly small CSV file on a local drive, but datastore can also reference a large collection of files, or data that sits in other locations such as HDFS.
It is important to start out with a small subset of the data to prototype and test your algorithm before you use it on really big data. MATLAB makes it really easy to prototype your algorithm on your local machine and then scale it up to the cluster or cloud later.
Set up source datastore.
source_ds = datastore('sampleData.csv','ReadVariableNames',false,...
    'NumHeaderLines',1,'VariableNames',{'VisitorCookieID','Page','Visits'});
source_ds.SelectedVariableNames = {'VisitorCookieID','Page'};
The data is a list of Visitor Cookie IDs and the pages associated with the IDs.
Let's review the data.
disp(preview(source_ds)) reset(source_ds)
           VisitorCookieID                                             Page
    __________________________________    ____________________________________________________________________________
    '3821337fdad7a6132253b10b602c4616'    '/matlabcentral/answers/'
    '3821337fdad7a6132253b10b602c4616'    '/matlabcentral/answers/152931-how-to-translate-the-following-code-from...'
    '3821337fdad7a6132253b10b602c4616'    '/matlabcentral/answers/153109-number-greater-than-the-largest-positive...'
    '3821337fdad7a6132253b10b602c4616'    '/help/matlab/ref/realmax.html'
    '3821337fdad7a6132253b10b602c4616'    '/matlabcentral/answers/153201-how-to-evaluate-large-factorials'
    '3821337fdad7a6132253b10b602c4616'    '/help/matlab/matlab_prog/floating-point-numbers.html'
    '3821337fdad7a6132253b10b602c4616'    '/matlabcentral/answers/contributors/5560534-tigo/questions'
    '3821337fdad7a6132253b10b602c4616'    '/matlabcentral/newsreader/view_thread/297433'
If you think of a visitor as a shopper, you can think of pages visited as items in the shopping cart (a transaction). A visitor can visit the same page multiple times, but such repeated visits are not factored in itemset counting.
One of the important considerations in designing a MapReduce algorithm is to minimize the number of keys you generate. For this reason, a good starting point is to group items into transactions using VisitorCookieID as the key, because we have a finite set of visitors, but they can visit a larger number of pages.
type transactionMapper
function transactionMapper(data, info, intermKVStore)
    tid = data.VisitorCookieID;
    item = data.Page;
    % get unique tids
    u_tid = unique(tid);
    % iterate over the data chunk to map multiple items to a unique tid
    items = cell(size(u_tid));
    for i = 1:length(tid)
        row = ismember(u_tid,tid{i});
        items{row}{end+1} = item{i};
    end
    % use addmulti to speed up the process
    addmulti(intermKVStore, u_tid, items)
end
The mapper will then pass these key-value pairs to the reducer.
type transactionReducer
function transactionReducer(key, value, outKVStore)
    items = {};
    % concatenate items from different mappers for the same key
    while hasnext(value)
        items = [items, getnext(value)];
    end
    % eliminate duplicates
    u_items = unique(items);
    % save the data to a key-value store
    add(outKVStore, key, u_items);
end
The reducer receives key-value pairs by key, merges multiple cell arrays with the same key into a single cell array, and removes any duplicates. We can then store the result in a new datastore.
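For intuition, the map/shuffle/reduce pattern can be sketched in a few lines of Python (an in-memory analogue of my own devising, not the MATLAB code; the dictionary stands in for the key-value stores):

```python
from collections import defaultdict

def map_phase(records):
    """Mapper: emit one (visitor_id, page) key-value pair per log row."""
    for visitor, page in records:
        yield visitor, page

def shuffle_and_reduce(pairs):
    """Group values by key, then deduplicate each visitor's pages."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return {key: sorted(set(pages)) for key, pages in groups.items()}

log = [('v1', '/a'), ('v2', '/b'), ('v1', '/a'), ('v1', '/c')]
transactions = shuffle_and_reduce(map_phase(log))
```

Repeated visits collapse, leaving one transaction (a set of pages) per visitor, exactly the shape the itemset-counting step needs.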
Now let's run this job.
Group items by transaction.
transaction_ds = mapreduce(source_ds, @transactionMapper, @transactionReducer);
transactions = readall(transaction_ds);
disp(transactions(1:5,:))
Parallel mapreduce execution on the local cluster:
********************************
*      MAPREDUCE PROGRESS      *
********************************
Map   0% Reduce   0%
Map 100% Reduce  50%
Map 100% Reduce 100%
                  Key                       Value
    __________________________________    ___________
    '00927996b5566e6347e55463dc7c6273'    {1x16 cell}
    '01c5f379d2e475d727eec2c711fb83f8'    {1x11 cell}
    '0717615157c000c4955c21b958b7866d'    {1x1 cell}
    '0f29597bff2a611013a4333f027b4f1a'    {1x12 cell}
    '13027f74a9c7402bf7b8f699b557815f'    {1x12 cell}
We now know that all items in a transaction are unique. All we need to do is to count the number of times an item appears in the transactions to see how many transactions contained that item. The pages were stored as a value in the previous step, so we simply need to retrieve just those and count their contents.
type oneItemsetMapper
function oneItemsetMapper(data, info, intermKVStore)
    % keys are in a cell array
    keys = data.Value{1};
    % create a cell array of 1's
    values = num2cell(ones(size(keys)));
    % save the data to a key-value store
    addmulti(intermKVStore, keys, values)
end
The mapper passes each instance of a page as 1.
type oneItemsetReducer
function oneItemsetReducer(key, value, outKVStore)
    count = 0;
    while hasnext(value)
        count = count + getnext(value);
    end
    add(outKVStore, key, count);
end
The reducer then collects the counts of a given page and sums them. Now let's run this job and read the completed results into memory.
% Get 1-itemsets
oneItemset_ds = mapreduce(transaction_ds, @oneItemsetMapper, @oneItemsetReducer);
% Read the result into memory
oneItemsets = readall(oneItemset_ds);
disp(oneItemsets(655:659,:))
Parallel mapreduce execution on the local cluster:
********************************
*      MAPREDUCE PROGRESS      *
********************************
Map   0% Reduce   0%
Map  50% Reduce   0%
Map 100% Reduce  50%
Map 100% Reduce 100%
                                 Key                                    Value
    __________________________________________________________________  _____
    '/loren/'                                                           [2]
    '/loren/2006/07/05/when-is-a-numeric-result-not-a-number/'          [1]
    '/loren/2009/10/02/using-parfor-loops-getting-up-and-running/'      [1]
    '/loren/2011/11/14/generating-c-code-from-your-matlab-algorithms/'  [1]
    '/loren/2012/02/06/using-gpus-in-matlab/'                           [1]
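In plain terms, the 1-itemset job is just a count over the deduplicated transactions; a compact Python analogue (my own sketch, not the MATLAB code) makes the logic explicit:

```python
from collections import Counter

def one_itemset_counts(transactions):
    """Each transaction holds unique items, so a flat count of all items
    equals the number of transactions containing each item."""
    return Counter(item for items in transactions.values() for item in items)

counts = one_itemset_counts({'v1': ['/a', '/c'], 'v2': ['/a'], 'v3': ['/c']})
```

Because duplicates were removed in the previous step, no per-transaction bookkeeping is needed here.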
Now we are ready to feed the transactions and oneItemsets data to findFreqItemsets, which also accepts a table of 1-itemsets as an optional input. The code is available in the earlier post.
Let's generate frequent itemsets based on a minimum support threshold of 0.02, which means the same pattern must appear among at least 2% of the visitors.
minSup = 0.02;
fprintf('Processing dataset with minimum support threshold = %.2f\n...\n', minSup)
[F,S,items] = findFreqItemsets(transactions,minSup,oneItemsets);
fprintf('Frequent Itemsets Found: %d\n', sum(arrayfun(@(x) size(x.freqSets,1), F)))
fprintf('Max Level : k = %d\n', length(F))
fprintf('Number of Support Data : %d\n\n', length(S))
Processing dataset with minimum support threshold = 0.02
...
Frequent Itemsets Found: 151
Max Level : k = 4
Number of Support Data : 4107
This step is no different from the example in the earlier post. Let's use a minimum confidence threshold 0.6.
minConf = 0.6;
rules = generateRules(F,S,minConf);
fprintf('Minimum Confidence : %.2f\n', minConf)
fprintf('Rules Found : %d\n\n', length(rules))
Minimum Confidence : 0.60
Rules Found : 99
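For reference, the support, confidence, and lift behind each rule can be sketched in a few lines (a generic Python illustration with made-up toy baskets, not the MATLAB generateRules code):

```python
def rule_metrics(transactions, lhs, rhs):
    """Support, confidence, and lift for the rule lhs => rhs (sets of items)."""
    n = len(transactions)
    both  = sum(1 for t in transactions if (lhs | rhs) <= t)
    n_lhs = sum(1 for t in transactions if lhs <= t)
    n_rhs = sum(1 for t in transactions if rhs <= t)
    support    = both / n                  # fraction containing both sides
    confidence = both / n_lhs              # P(rhs | lhs)
    lift       = confidence / (n_rhs / n)  # confidence relative to chance
    return support, confidence, lift

baskets = [{'a', 'b'}, {'a', 'b', 'c'}, {'a', 'c'}, {'b'}]
sup, conf, lift = rule_metrics(baskets, {'a'}, {'b'})
```

A lift above 1 means the rule's consequent shows up more often with the antecedent than it would by chance, which is what the color axis in the scatter plot below the fold encodes.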
Now we can visualize the rules we generated. This sample dataset is very tiny, and the number of rules is limited.
conf = arrayfun(@(x) x.Conf, rules);  % get conf as a vector
lift = arrayfun(@(x) x.Lift, rules);  % get lift as a vector
sup = arrayfun(@(x) x.Sup, rules);    % get support as a vector
colormap cool
scatter(sup,conf,lift*5, lift, 'filled')
xlabel('Support'); ylabel('Confidence')
t = colorbar('peer',gca);
set(get(t,'ylabel'),'String', 'Lift');
title('Sample Data')
Unfortunately we can't share this sample data for you to try, but it is easy to adapt this code to a publicly available dataset such as the Anonymous Microsoft Web Data Data Set from the UCI Machine Learning Repository. This data is preprocessed from raw clickstream logs, and we need to process it back to the raw format we used in the sample data. You can see the code used to process this dataset below.
When you apply Market Basket Analysis to this dataset, you should get something like this.
Rule #1
{Windows 95} => {Windows Family of OSs}
Conf: 0.91, Lift: 6.46

Rule #2
{Windows95 Support} => {isapi}
Conf: 0.84, Lift: 5.16

Rule #3
{SiteBuilder Network Membership} => {Internet Site Construction for Developers}
Conf: 0.80, Lift: 8.17

Rule #5
{Knowledge Base, isapi} => {Support Desktop}
Conf: 0.71, Lift: 5.21

Rule #18
{Windows Family of OSs, isapi} => {Windows95 Support}
Conf: 0.64, Lift: 11.66
The sample code also includes MapReduce steps on a reduced dataset. I reduced the dataset because my MapReduce code is not optimal for this dataset and runs extremely slowly: it assumes that there are more pages than visitors, but this Microsoft dataset has more visitors than pages.
If you would like to use MapReduce on the full Microsoft dataset, I strongly recommend that you write your own MapReduce code that is better optimized for this dataset.
Now that we have prototyped and tested the algorithm, I can see whether this process will give me the insight I am looking for. Once I am happy with it, I still need to figure out where to store the larger data, on a cluster or in the cloud. That is more of a business challenge than a technical one, and I am still in the middle of it.
Programming MapReduce in MATLAB is straightforward. The source data was local in this example, but you can also use it with a larger dataset that sits across multiple locations, such as HDFS files. You can prototype your MapReduce algorithm locally, then change the configuration to scale up to a larger dataset.
In this example, MapReduce was used only for the initial data processing, and the rest was still done in memory. As long as the processed result fits in memory, we can use this approach.
However, if the processed data gets larger, then we need to make more use of MapReduce in other steps of the Apriori algorithm.
The key is to adapt the algorithm to parallel processing. In the current incarnation, the candidate pruning stage runs in a single thread.
Instead, we could subdivide the dataset into several chunks and run the entire process, through rule generation, on each chunk. If we do so, we need to adjust the minimum support to account for the smaller transaction counts in each subset. Then we combine the final output. This may not produce exactly the same result as the single-threaded process, but it should be fairly close.
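As a rough sketch of that chunked approach (my own hypothetical illustration, reusing the findFreqItemsets and generateRules helpers from the earlier post, and not tested at scale):

```matlab
% Hypothetical sketch: mine each chunk independently, then pool the rules.
numChunks = 4;
edges = round(linspace(0,numel(transactions),numChunks+1));
allRules = [];
for c = 1:numChunks
    chunk = transactions(edges(c)+1:edges(c+1));
    % minSup is a fraction of transactions, so it carries over to each
    % chunk as-is; if you use an absolute support count instead, scale it
    % by numel(chunk)/numel(transactions)
    [F,S] = findFreqItemsets(chunk,minSup);
    allRules = [allRules; generateRules(F,S,minConf)]; %#ok<AGROW>
end
% rules found in several chunks appear several times in allRules, so a
% final de-duplication / re-scoring pass is still needed
```

Because support and confidence are estimated per chunk, the pooled rules are only an approximation of the single-threaded result, as noted above.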
Do you see ways in which you might leverage MapReduce to handle larger data analysis projects? Let us know here.
Here is the code I used to process the Microsoft dataset.
type processMicrosoftWebData.m
%% Let's load the dataset
% The UCI dataset is not in the raw log format, but in a special
% non-tabular ASCII format. We need to process the data into a format we
% can use. This is not a typical process you do when you work with actual
% raw clickstream data.

% clear everything
clearvars; close all; clc

% the source data file
filename = 'anonymous-msweb.data';
% if the file doesn't exist, download it from the UCI website
if exist(filename,'file') == 0
    url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/anonymous/anonymous-msweb.data';
    filepath = websave('anonymous-msweb.data',url);
end

% accumulators
aid = [];             % attribute id
vroot = {};           % vroot - page name
url = {};             % url of the page
vid = [];             % visitor id
visits = [];          % aid of vroots visited by a visitor
VisitorCookieID = {}; % same as vid but as string

% open the file
fid = fopen(filename);
% read the first line
tline = fgetl(fid);
% if the line contains a string
while ischar(tline)
    % if the line contains attribute
    if strcmp(tline(1),'A')
        c = textscan(tline,'%*s%d%*d%q%q','Delimiter',',');
        aid = [aid;c{1}];
        vroot = [vroot;c{2}];
        url = [url;c{3}];
    % if the line contains case
    elseif strcmp(tline(1),'C')
        user = textscan(tline,'%*s%*q%d','Delimiter',',');
    % if the line contains vote
    elseif strcmp(tline(1),'V')
        vid = [vid;user{1}];
        vote = textscan(tline,'%*s%d%*d','Delimiter',',');
        visits = [visits; vote{1}];
        VisitorCookieID = [VisitorCookieID;['id',num2str(user{1})]];
    end
    tline = fgetl(fid);
end
% close the file
fclose(fid);

% sort attributes by aid
[~,idx] = sort(aid);
aid = aid(idx);
vroot = vroot(idx);
url = url(idx);

% populate Page with vroot based on aid
Page = cell(size(visits));
for i = 1:length(aid)
    Page(visits == aid(i)) = vroot(i);
end

% create table and write it to disk if it doesn't exist
transactions = table(VisitorCookieID,Page);
if exist('msweb.transactions.csv','file') == 0
    % just keep the first 318 rows to use as sample data
    transactions(319:end,:) = []; % comment out to keep the whole thing
    writetable(transactions,'msweb.transactions.csv')
end

%% Association Rule Mining without MapReduce
% Since we already have all necessary pieces of data in the workspace, we
% might as well do the analysis now.

% get unique uid
uniq_uid = unique(vid);
% create a cell array of visits that contains vroots visited
transactions = cell(size(uniq_uid));
for i = 1:length(uniq_uid)
    transactions(i) = {visits(vid == uniq_uid(i))'};
end

% find frequent itemsets of vroots from the visits
minSup = 0.02;
fprintf('Processing dataset with minimum support threshold = %.2f\n...\n', minSup)
[F,S,items] = findFreqItemsets(transactions,minSup);
fprintf('Frequent Itemsets Found: %d\n', sum(arrayfun(@(x) size(x.freqSets,1), F)))
fprintf('Max Level              : k = %d\n', length(F))
fprintf('Number of Support Data : %d\n\n', length(S))

% generate rules
minConf = 0.6;
rules = generateRules(F,S,minConf);
fprintf('Minimum Confidence     : %.2f\n', minConf)
fprintf('Rules Found            : %d\n\n', length(rules))

% plot the rules
conf = arrayfun(@(x) x.Conf, rules); % get conf as a vector
lift = arrayfun(@(x) x.Lift, rules); % get lift as a vector
sup = arrayfun(@(x) x.Sup, rules);   % get support as a vector
colormap cool
scatter(sup,conf,lift*5, lift, 'filled')
xlabel('Support'); ylabel('Confidence')
t = colorbar('peer',gca);
set(get(t,'ylabel'),'String', 'Lift');
title('Microsoft Web Data')

% display the selected rules
selected = [1,2,3,5,18];
for i = 1:length(selected)
    fprintf('Rule #%d\n', selected(i))
    lenAnte = length(rules(selected(i)).Ante);
    if lenAnte == 1
        fprintf('{%s} => {%s}\nConf: %.2f, Lift: %.2f\n\n',...
            vroot{rules(selected(i)).Ante(1)},vroot{rules(selected(i)).Conseq},...
            rules(selected(i)).Conf,rules(selected(i)).Lift)
    elseif lenAnte == 2
        fprintf('{%s, %s} => {%s}\nConf: %.2f, Lift: %.2f\n\n',...
            vroot{rules(selected(i)).Ante(1)},vroot{rules(selected(i)).Ante(2)},...
            vroot{rules(selected(i)).Conseq},rules(selected(i)).Conf,rules(selected(i)).Lift)
    end
end

%% MapReduce
% My MapReduce code is not well suited for this dataset and runs extremely
% slowly if you use the whole dataset. I will use just a subset for
% demonstration purposes. If you want to try this on the full dataset, you
% should write your own MapReduce code optimized for it.

% clear everything
clearvars; close all; clc

% set up source datastore
source_ds = datastore('msweb.transactions.csv');
disp(preview(source_ds))

% step 1: Group items by transaction
transaction_ds = mapreduce(source_ds, @transactionMapper, @transactionReducer);
transactions = readall(transaction_ds);

% step 2: Generate 1-itemsets
oneItemset_ds = mapreduce(transaction_ds, @oneItemsetMapper, @oneItemsetReducer);
oneItemsets = readall(oneItemset_ds);
Get the MATLAB code
Published with MATLAB® R2014b
Jiro's pick this week is Decimate Polygon by Anton Semechko.
Just a little over a year ago, I wrote a blog post on downsampling polygons. I highlighted Peter's Polygon Simplification. His entry helped me out in one of the side projects I was working on at the time. Soon after, I received an email from a File Exchange contributor letting me know about another entry for accomplishing the task of downsampling polygons. I appreciate people letting me know about entries I've overlooked. When I select a file to highlight in my posts I try to search for similar entries, but I don't always catch them all.
Let's see how DecimatePoly works. I'm going to use the same image I used for my previous post.
im = imread('amoeba.png');
imshow(im)
As before, I will convert this to black and white and trace out the boundaries using bwboundaries from the Image Processing Toolbox. Then I will overlay the largest boundary onto the original image.
im2 = im2bw(im);
boundaries = bwboundaries(~im2,'noholes')
largest = boundaries{1};
hold on
plot(largest(:,2),largest(:,1),'r','LineWidth',2)
hold off
boundaries = 
    [1696x2 double]
    [  30x2 double]
    [  33x2 double]
    [ 111x2 double]
    [  31x2 double]
    [  34x2 double]
Currently, the boundary has 1696 vertices.
size(largest)
ans = 1696 2
With DecimatePoly, I can downsample this polygon by specifying either the maximum acceptable offset from the original boundary or the fraction of the total number of points to retain. For example, if I want to ensure that the downsampled boundary is off by a maximum of 0.1 pixels,
b1 = DecimatePoly(largest, [.1 1]);
          # of verts    perimeter        area
 in          1695        1948.91     27992.50
 out          926        1948.91     27992.50
 ----------------------------------------------
 change     -45.37%        0.00%        0.00%
I can see that I have about 45% fewer points while maintaining the same perimeter and area. If I loosen my constraint,
b2 = DecimatePoly(largest, [1 1]);
          # of verts    perimeter        area
 in          1695        1948.91     27992.50
 out          473        1863.07     28046.00
 ----------------------------------------------
 change     -72.09%       -4.40%        0.19%
I have over 70% fewer points with only small changes in perimeter and area.
figure
subplot(1,2,1)
plot(largest(:,2),largest(:,1))
axis image ij
subplot(1,2,2)
plot(b2(:,2),b2(:,1))
axis image ij
So why is having fewer points preferred? Well, Anton has included a nice demo with his entry that shows improved performance in inpolygon tests. This is one of the plots from the demo.
He looked at how different numbers of polygon points affect the computation time and the accuracy of inpolygon tests. The horizontal axis (Percent simplification) represents the degree of downsampling. The lower the percent simplification, the greater the downsampling. The Dice coefficient (blue) shows how similar the results are between the downsampled polygon and the full polygon. The closer the value is to 1, the more similar the results. The right axis (red) shows the relative speed up of the computation time compared to the full polygon. We can see that even at around 10% simplification (downsampled by 90%), we are able to get over 98% similarity and 10x speed up.
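As a toy illustration of the Dice idea (my own code, not Anton's demo), we can compare inpolygon results for a finely sampled circle and a heavily downsampled 20-vertex version of it over a grid of query points:

```matlab
% Dice coefficient between inpolygon results for a full and a simplified polygon
theta = linspace(0,2*pi,1000);
xFull = cos(theta);  yFull = sin(theta);    % "full" polygon
thetaC = linspace(0,2*pi,20);
xSimp = cos(thetaC); ySimp = sin(thetaC);   % heavily downsampled polygon
[X,Y] = meshgrid(linspace(-1.2,1.2,200));   % grid of query points
inFull = inpolygon(X,Y,xFull,yFull);
inSimp = inpolygon(X,Y,xSimp,ySimp);
dice = 2*nnz(inFull & inSimp)/(nnz(inFull) + nnz(inSimp))
```

Even this crude 50x downsampling yields a Dice coefficient close to 1, which is the effect Anton's demo quantifies.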
Do you have a need to downsample polygons? What is your use case? Let us know here or leave a comment for Anton.
Introduction
In this blog post I will highlight how Simulink Project can be used to manage conflicts that occur when two users submit changes to the same Simulink model using a source control tool.
A conflict arises when two users make changes to the same file. The first user to submit their changes to the source control system experiences no problem, but the second user’s changes are considered to be in conflict with the first user’s changes; so the source control tool prevents the second user from committing their file until they have merged their changes into the first user’s version of the file.
Environment set up
In order to show how conflict resolution can be performed with Simulink Project I have set up a modified version of our Airframe demo.
I created two working copies called demo1 and demo2 which are checked out from an SVN repository created by running the above demo. User 1 will use the folder demo1 and user 2 will use the folder demo2.
Here is a screenshot of what the project looks like when user 1 opens their copy of the project in the working copy demo1.
Binary case
I have set up my SVN config file to treat .slx files as binary; see our documentation about how to set up SVN source control.
The project contains a model called vertical_channel.slx which looks like this:
User 1 modifies this model to change the sign of both the sum block and the feedback gain, so that it now looks like this:
User 1 then commits his changes using the modified files view.
This trivial change was easy to make and commit to the SVN repository, but as we are about to see when user 2 makes a change to his copy of this file, submitting the change is going to be much more difficult.
User 2 hasn’t performed any update so his project looks the same as user 1’s project before he made his change.
User 2 opens the vertical channel model and adds a scope to his model so that it looks like this:
When user 2 tries to commit they get the following error:
This error tells him that he doesn’t have the latest version of vertical_channel.slx, so he can’t commit his changes to it. To get the latest version he performs an update. This causes a problem: the changes in his working copy conflict with the changes made between the version of the file he has and the latest version in the repository, which was submitted by user 1. User 2’s change is now in conflict with user 1’s change, and the project marks this file as conflicted in user 2’s project.
Because we registered .slx files as binary with SVN, the SVN update did not attempt to change the content of the file vertical_channel.slx; it still contains user 2’s addition of the scope to the original model.
User 2 can use Simulink Project to resolve the conflict as follows:
When user 1 performs an update he now gets the merged model in his working copy:
Now it's your turn
Do you often need to merge Simulink models? Let us know by leaving a comment here.
Iterative refinement is a technique introduced by Wilkinson for reducing the roundoff error produced during the solution of simultaneous linear equations. Higher precision arithmetic is required for the calculation of the residuals.
The first research paper I ever published, in 1967, was titled "Iterative Refinement in Floating Point". It was an analysis of a technique introduced by J. H. Wilkinson almost 20 years earlier for a post processing cleanup that reduces the roundoff error generated when Gaussian elimination or a similar process is used to solve a system of simultaneous linear equations, $Ax = b$.
The iterative refinement algorithm is easily described:

1. Solve $Ax = b$ by Gaussian elimination, saving the factorization.
2. Compute the residual, $r = Ax - b$.
3. Use the saved factorization to solve the correction equation, $Ad = r$.
4. Update the solution, $x = x - d$, and repeat from step 2 if necessary.
Almost all of the work is in the first step, which can be thought of as producing triangular factors such as $L$ and $U$ so that $A = LU$ while solving $LUx = b$. For a matrix of order $n$ the computational complexity of this step is $O(n^3)$. Saving the factorization reduces the complexity of the remaining refinement steps to something much less, only $O(n^2)$.
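The algorithm, with a saved factorization, can be sketched in a few lines of MATLAB (my own illustration, not code from this post; note that the residual here is computed in ordinary double precision, so, as explained below, the correction is of limited value):

```matlab
% Sketch of iterative refinement reusing a saved LU factorization.
A = gallery('randsvd',8,1e8);  % an ill-conditioned test matrix
b = ones(8,1);
[L,U,p] = lu(A,'vector');      % the O(n^3) factorization, done once
x = U\(L\b(p));                % initial solution
for k = 1:3                    % each refinement step is only O(n^2)
    r = A*x - b;               % residual (ideally in higher precision)
    d = U\(L\r(p));            % correction from the saved factors
    x = x - d;                 % updated solution
end
```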
By the early 1960s we had learned from Wilkinson that if a system of simultaneous linear equations is solved by a process like Gaussian elimination or Cholesky factorization, the residual will always be order roundoff error, relative to the matrix and the computed solution, even if the system is nearly singular. This is both good news and bad news.
The good news is that $Ax$ is always close to $b$. This says that the computed solution always comes close to solving the equations, even though $x$ might not be close to the theoretical correct solution, $A^{-1}b$. The pitcher always puts the ball where the batter can hit it, even though that might not be in the strike zone.
The bad news is that it is delicate to compute the residual accurately. If the same precision arithmetic is used to compute the residual that was used to solve the system, the roundoff error involved in computing $r$ will be almost comparable to the effect of the roundoff error present in $x$, so the correction has little chance of being useful.
We need to use higher precision arithmetic while computing the residual. Each component of the residual involves a sum of products and then one final subtraction. The exact product of two numbers with a certain precision has twice that precision. With the computers that Wilkinson used, and that I used early in my career, we had access to the full results of multiplications. We were able to write inner product routines that accumulated the sums with twice the working precision.
But it is not easy to write the accumulated inner product routine in modern, portable, machine independent software. It was not easy in Fortran. It is not easy in MATLAB. The original specification of the BLAS, the Basic Linear Algebra Subroutines, was deliberately silent on the matter. Subsequent proposals for extensions of the BLAS have introduced mixed precision, but these extensions have not been widely adopted. So, the key tool we need to implement iterative refinement has not been available.
In my next blog post, I will describe two MATLAB functions residual3p and dot3p. They provide enough of what I call "triple precision" arithmetic to produce an accumulated inner product. It's a hack, but it works well enough to illustrate iterative refinement.
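To preview the flavor of such a hack, here is a sketch of the well-known compensated dot product of Ogita, Rump, and Oishi, built from Dekker-style error-free transformations. This is my own illustration, not the actual dot3p code:

```matlab
function s = dot2(x,y)
% DOT2  Compensated dot product: accumulates x'*y with roughly twice the
% working precision using error-free transformations (a sketch of the
% Ogita-Rump-Oishi Dot2 algorithm).
s = 0; c = 0;
for i = 1:numel(x)
    [p,ep] = twoProduct(x(i),y(i));
    [s,es] = twoSum(s,p);
    c = c + es + ep;   % accumulate the rounding errors separately
end
s = s + c;             % fold the error term back in at the end
end

function [s,e] = twoSum(a,b)
% error-free sum: a + b == s + e exactly
s = a + b;
z = s - a;
e = (a - z) + (b - (s - z));
end

function [p,e] = twoProduct(a,b)
% error-free product via Dekker splitting: a*b == p + e exactly
p = a*b;
[ah,al] = split(a);
[bh,bl] = split(b);
e = ((ah*bh - p) + ah*bl + al*bh) + al*bl;
end

function [h,l] = split(a)
c = 134217729*a;       % 2^27 + 1, Dekker's splitter for doubles
h = c - (c - a);
l = a - h;
end
```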
My example involves perhaps the world's most famous badly conditioned matrix, the Hilbert matrix. I won't begin with the Hilbert matrix itself because its elements are the fractions
$$ h_{i,j} = \frac{1}{i+j-1} $$
Many of these fractions can't be represented exactly in floating point, so I would have roundoff error before even getting started with the elimination. Fortunately, the elements of the inverse Hilbert matrix are integers that can be readily generated. There is a function invhilb in the MATLAB elmat directory. I'll choose the 8-by-8. The elements are large, so I need a custom format to display the matrix.
n = 8;
A = invhilb(n);
disp(sprintf('%8.0f %11.0f %11.0f %11.0f %11.0f %11.0f %11.0f %11.0f \n',A))
      64       -2016       20160      -92400      221760     -288288      192192      -51480 
   -2016       84672     -952560     4656960   -11642400    15567552   -10594584     2882880 
   20160     -952560    11430720   -58212000   149688000  -204324120   141261120   -38918880 
  -92400     4656960   -58212000   304920000  -800415000  1109908800  -776936160   216216000 
  221760   -11642400   149688000  -800415000  2134440000 -2996753760  2118916800  -594594000 
 -288288    15567552  -204324120  1109908800 -2996753760  4249941696 -3030051024   856215360 
  192192   -10594584   141261120  -776936160  2118916800 -3030051024  2175421248  -618377760 
  -51480     2882880   -38918880   216216000  -594594000   856215360  -618377760   176679360 
I am going to try to compute the third column of the inverse of this inverse, which is a column of the Hilbert matrix itself. The right hand side b is a column of the identity matrix. I am hoping to get the fractions x = [1/3 1/4 ... 1/9 1/10].
b = zeros(n,1);
b(3) = 1
format compact
format long e
x = A\b
b =
     0
     0
     1
     0
     0
     0
     0
     0
x =
     3.333333289789291e-01
     2.499999961540004e-01
     1.999999965600743e-01
     1.666666635570877e-01
     1.428571400209730e-01
     1.249999973935827e-01
     1.111111087003338e-01
     9.999999775774569e-02
Since I know what x is supposed to look like, I can just eyeball the output and see that I have only about half of the digits correct.
(I used backslash to solve the system. My matrix happens to be symmetric and positive definite, so the elimination algorithm involves the Cholesky factorization. But I'm going to be extravagant, ignore the complexity considerations, and not save the triangular factor.)
Here's my first crack at the residual. I won't do anything special about the precision this time; I'll just use an ordinary MATLAB statement.
r = A*x - b
r =
     9.094947017729282e-13
     1.746229827404022e-10
     4.656612873077393e-10
     1.862645149230957e-08
     1.490116119384766e-08
     2.980232238769531e-08
     3.725290298461914e-08
     1.862645149230957e-08
It's important to look at the size of the residual relative to the sizes of the matrix and the solution.
relative_residual = norm(r)/(norm(A)*norm(x))
relative_residual =
     1.147025634044834e-17
The elements in this computed residual are the right order of magnitude, that is roundoff error, but, since I didn't use any extra precision, they are not accurate enough to provide a useful correction.
d = A\r
no_help = x - d
d =
    -1.069920014936507e-08
    -9.567761339244008e-09
    -8.614990592214338e-09
    -7.819389121717837e-09
    -7.150997084009303e-09
    -6.584022612326096e-09
    -6.098163254532801e-09
    -5.677765952511023e-09
no_help =
     3.333333396781292e-01
     2.500000057217617e-01
     2.000000051750649e-01
     1.666666713764768e-01
     1.428571471719701e-01
     1.250000039776053e-01
     1.111111147984970e-01
     1.000000034355116e-01
Now I will use residual3p, which I intend to describe in my next blog and which employs "triple precision" accumulation of the inner products required for an accurate residual.
r = residual3p(A,x,b)
r =
     4.045319634826683e-12
     1.523381421009162e-10
     9.919851606809971e-10
     2.429459300401504e-09
     8.826383179894037e-09
     2.260851861279889e-08
     1.332933052822227e-08
     6.369845095832716e-09
Superficially, this residual looks a lot like the previous one, but it's a lot more accurate. The resulting correction works very well.
d = A\r
x = x - d
d =
    -4.354403560053519e-09
    -3.845999016894392e-09
    -3.439925156187715e-09
    -3.109578484769736e-09
    -2.836169428940436e-09
    -2.606416977917484e-09
    -2.410777025186154e-09
    -2.242253997222573e-09
x =
     3.333333333333327e-01
     2.499999999999994e-01
     1.999999999999995e-01
     1.666666666666662e-01
     1.428571428571425e-01
     1.249999999999996e-01
     1.111111111111108e-01
     9.999999999999969e-02
I've now got about 14 digits correct. That's almost, but not quite, full double precision accuracy.
Try it again.
r = residual3p(A,x,b)
r =
     3.652078639504452e-12
     1.943885052924088e-10
     2.523682596233812e-09
     1.359348900109580e-08
     3.645651958095186e-08
     5.142027248439263e-08
     3.649529745075597e-08
     1.027348206505963e-08
Notice that the residual r is just about the same size as the previous one, even though the solution x is several orders of magnitude more accurate.
d = A\r
nice_try = x - d
d =
     2.733263259661321e-16
     2.786131033681204e-16
     2.611667424188757e-16
     2.527960139656094e-16
     2.492795072717761e-16
     2.196895809665418e-16
     2.110200076421557e-16
     1.983918218604762e-16
nice_try =
     3.333333333333324e-01
     2.499999999999991e-01
     1.999999999999992e-01
     1.666666666666660e-01
     1.428571428571422e-01
     1.249999999999994e-01
     1.111111111111106e-01
     9.999999999999949e-02
The correction changed the solution, but didn't make it appreciably more accurate. I've reached the limits of my triple precision inner product.
Bring in the big guns, the Symbolic Math Toolbox, to compute a very accurate residual. It is important to use either the 'f' or the 'd' option when converting x to a sym so that the conversion is done exactly.
% r = double(A*sym(x,'d') - b)
r = double(A*sym(x,'f') - b)
r =
     3.652078639504452e-12
     1.943885052924088e-10
     2.523682707256114e-09
     1.359348633656055e-08
     3.645651780459502e-08
     5.142027803550775e-08
     3.649529478622071e-08
     1.027348206505963e-08
The correction just nudges the last two digits.
d = A\r
x = x - d
d =
    -6.846375178532078e-16
    -5.828670755614817e-16
    -5.162536953886886e-16
    -4.533410583885285e-16
    -3.965082139594863e-16
    -3.747002624289523e-16
    -3.392348053271079e-16
    -3.136379972458518e-16
x =
     3.333333333333333e-01
     2.500000000000000e-01
     2.000000000000000e-01
     1.666666666666667e-01
     1.428571428571429e-01
     1.250000000000000e-01
     1.111111111111111e-01
     1.000000000000000e-01
Now, with a very accurate residual, the elements I get in x are the floating point numbers closest to the fractions in the Hilbert matrix. That's the best I can do.
There is more to say about iterative refinement. See Nick Higham's SIAM book and, especially, the 2006 TOMS paper by Demmel, Kahan and their Berkeley colleagues. A preprint is available from Kahan's web site.
Cleve Moler, Iterative Refinement in Floating Point, Journal of the ACM 14, pp. 316-321, 1967, <http://dl.acm.org/citation.cfm?doid=321386.321394>
Nicholas Higham, Accuracy and Stability of Numerical Algorithms (2nd ed), SIAM, 2002, <http://epubs.siam.org/doi/book/10.1137/1.9780898718027>
James Demmel, Yozo Hida, William Kahan, Xiaoye S. Li, Sonil Mukherjee, and E. Jason Riedy, Error Bounds from Extra-Precise Iterative Refinement, ACM Transactions on Mathematical Software 32, pp. 325-351, 2006, <http://www.cs.berkeley.edu/~wkahan/p325demmel.pdf>
C-code S-Function | MATLAB Function | MATLAB Function (Debugging Disabled)
2.15 sec          | 2.67 sec        | 2.30 sec
~                 | +24%            | +6.8%
Since R2012b, it has been possible to comment out blocks in Simulink. As soon as this feature was released, it immediately became popular among all the users I know.
Soon after, in R2013b, the possibility to comment through blocks with the same numbers of inputs and outputs was introduced. Also very useful!
Today I just realized that in R2014b, it is also possible to comment out Stateflow objects.
Commenting Stateflow Objects in a Chart
Commenting out a Stateflow object is as easy as it should be. Select one or more objects and hit Ctrl+Shift+X.
Adding text to the commented objects
One useful feature associated with commented Stateflow objects is the possibility to add notes, for example to explain why the object is commented. Click on the % sign in the bottom left corner of the object and a dialog will pop up.
The text entered in this dialog will be displayed when you hover the mouse over the object.
Now it's your turn
In my opinion, the ability to comment out Simulink blocks and Stateflow objects is the feature that has most significantly affected my model editing workflow in many years.
Have you also included the commenting in your workflows? Let us know by leaving a comment here.
Today I'd like to introduce a guest blogger, Andrea Ho, who works for the MATLAB Documentation team here at MathWorks. Andrea will discuss three new data types for representing dates and times in MATLAB in R2014b: datetime, duration, and calendarDuration.
Dates and times are frequently a critical part of data analysis. If they are part of your work, you've likely encountered some of the limitations of date numbers, date vectors, and date strings.
Starting in R2014b, datetime, duration, and calendarDuration are data types dedicated to storing date and time data, and are easily distinguishable from numeric values. The three types differentiate between points in time, elapsed time in units of constant length (such as hours, minutes, and seconds), and elapsed time in units of flexible length (such as weeks and months). datetime, duration, and calendarDuration represent dates and times in a way that is easy to read, is suitable for computations, has much higher precision (up to nanosecond precision), and is more memoryefficient. With the new data types, you won't have to maintain or convert between different representations of the same date and time value. Work with datetime, duration, and calendarDuration much like you do with numeric types like double, using standard array operations.
Let's see the new data types in action.
The datetime data type represents absolute points in time such as January 1, 2015 at midnight. The datetime function can create a datetime variable from the current date and time:
t0 = datetime('now')
t0 = 16-Jan-2015 16:24:30
or from numeric inputs:
t0 = datetime(2015,1,1,2,3,4)
t0 = 01-Jan-2015 02:03:04
whos t0
  Name      Size            Bytes  Class       Attributes

  t0        1x1               121  datetime              
If you are interested only in the time portion of t0, you can display just that.
t0.Format = 'HH:mm'
t0 = 02:03
Changing the format of a datetime variable affects how it is displayed, but not its numeric value. Though you can no longer see the year, month and day values, that information is still stored in the variable. Let's turn the display of the date portion of t0 back on.
t0.Format = 'dd-MMM-yyyy HH:mm'
t0 = 01-Jan-2015 02:03
By changing the display format, you can view the value of a datetime variable with up to nanosecond precision!
t0.Format = 'dd-MMM-yyyy HH:mm:ss.SSSSSSSSS'
t0 = 01-Jan-2015 02:03:04.000000000
If you are a frequent user of date string formats, you'll recognize that some of the format specifiers for datetime are different from those for the datestr, datenum, and datevec functions. The new format syntax allows for more options and is now consistent with the Unicode Locale Data Markup Language (LDML) standard.
The duration data type represents the exact difference between two specific points in time. Suppose you drove overnight, leaving your home on January 1, 2015 at 11 PM, and arriving at your destination at 4:30 AM the next day. How much time did you spend on the road? (Ignore the possibility of crossing into another time zone; we'll see how datetime manages time zones in a future post.)
t0 = datetime(2015,1,1,23,0,0);
t1 = datetime(2015,1,2,4,30,0);
dt = t1 - t0
dt = 05:30:00
dt is a duration variable displayed as hours:minutes:seconds. Like datetime, a duration behaves mostly like a numeric value but displays in a format that shows time well. Let's change the format to view dt as a number of minutes.
dt.Format = 'm'
dt = 330 mins
We can perform common arithmetic operations on date and time variables. Even though dt is currently displayed as a number of minutes, we can add or subtract a duration in any unit, including hours and seconds.
dt2 = dt - hours(1) + seconds(5)
dt2 = 270.08 mins
Suppose you opened a bank account on January 1, 2014. The account pays interest monthly on the last day of each month. Let's find the first date on which interest was paid. Is there a solution that does not depend on the account opening date? The dateshift function is useful for shifting dates to the start or end of a unit of time. In our case, we will shift the account opening date to the end of the month.
t0 = datetime(2014,1,1); t1 = dateshift(t0,'end','month')
t1 = 31-Jan-2014
Suppose our records indicate that we received interest payments at noon. We can add this information to t1 by modifying its Hour property.
t1.Hour = 12;
Now let's create a sequence of dates to represent the next 12 payment dates. What happens if we add increments of 30 days to t1?
t = t1 + days(0:30:335)'
t = 
   31-Jan-2014 12:00:00
   02-Mar-2014 12:00:00
   01-Apr-2014 12:00:00
   01-May-2014 12:00:00
   31-May-2014 12:00:00
   30-Jun-2014 12:00:00
   30-Jul-2014 12:00:00
   29-Aug-2014 12:00:00
   28-Sep-2014 12:00:00
   28-Oct-2014 12:00:00
   27-Nov-2014 12:00:00
   27-Dec-2014 12:00:00
Yikes, most of these dates don't fall on the last day of the month. A duration of 30 days, with each day being exactly 24 hours long, was not the correct step size to use. The correct step size depends on the month, since some months have 30 days, while others have 31 (or perhaps 28, or 29). What we really want to do is add a series of calendar months to t1.
t = t1 + calmonths(0:11)'
t = 
   31-Jan-2014
   28-Feb-2014
   31-Mar-2014
   30-Apr-2014
   31-May-2014
   30-Jun-2014
   31-Jul-2014
   31-Aug-2014
   30-Sep-2014
   31-Oct-2014
   30-Nov-2014
   31-Dec-2014
The calmonths function creates a calendarDuration array containing calendar months. A calendarDuration value represents a flexible or nonconstant amount of time such as 1 month. There is no way to know exactly how many days, hours, minutes, or seconds there are in 1 month without knowing which month of the year I am referring to. The length of a calendar month is unknown until you relate it to a specific point in time.
How many days are between each of the payment dates?
dt = caldiff(t,'days')
dt = 
   28d
   31d
   30d
   31d
   30d
   31d
   31d
   30d
   31d
   30d
   31d
Each incremental calendar month represents a different number of days.
Our bank sent us a file named interest.txt that contains interest amounts for each month of the year. We can import this data interactively using the Import Tool.
Alternatively, we can use the readtable and textscan functions to read the file programmatically. Use the %D specifier to read a date or time string and specify the string format within curly braces.
T = readtable('interest.txt','Format','%{dd MMMM yyyy HH:mm}D %f')
T = 
              Date              Interest
    _______________________     ________
    31 January 2014 12:00       1.93    
    28 February 2014 12:00      4.28    
    31 March 2014 12:00         4.82    
    30 April 2014 12:00         1.23    
    31 May 2014 12:00           5.89    
    30 June 2014 12:00          2.26    
    31 July 2014 12:00          3.84    
    31 August 2014 12:00        5.82    
    30 September 2014 12:00     2.51    
    31 October 2014 12:00       2.99    
    30 November 2014 12:00      6.17    
    31 December 2014 12:00      2.65    
Even though the text file contains strings, the values in the Date column of the table are not strings. They are datetime values.
If we compare our payment times with the bank's data, we'll see that they are the same.
isequal(t,T.Date)
ans = 1
Now let's calculate the cumulative interest at the end of each month and plot the data. When using plot, working with datetime is much easier than working with serial date numbers.
c_payments = cumsum(T.Interest);
figure
plot(T.Date,c_payments)
xlabel('Date')
ylabel('Cumulative Interest Payments')
The date axis is automatically labeled with date strings that are easy to understand. You no longer have to call datetick to do this! If we want to change the format of the dates, we can specify the desired format of the axis ticks in the call to plot.
figure
plot(T.Date,c_payments,'DatetimeTickFormat','MMMM')
xlabel('Date')
ylabel('Cumulative Interest Payments')
ax = gca;
ax.XTickLabelRotation = 15;
To create charts containing date and time values using other plotting functions, you still need to work with serial date numbers for the time being. We recognize that this is inconvenient and we are planning improvements. You can learn more here about how to plot datetime and duration values.
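As a stopgap, one workaround (a sketch, not the only option) is to convert the datetime values to serial date numbers with datenum and label the axis with datetick:

```matlab
% Hypothetical fallback for plotting functions that do not yet
% accept datetime inputs directly (e.g., bar charts).
figure
bar(datenum(T.Date), T.Interest)   % datetime -> serial date numbers
datetick('x','mmm yyyy')           % format the tick labels as dates
xlabel('Date')
ylabel('Interest')
```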
Notice that in our analysis of travel times and interest payments, there was no need to use serial date numbers, date vectors, or strings at all. datetime, duration, and calendarDuration allow you to manipulate dates and times quickly and more intuitively. The new data types are intended as a replacement for datenum, datevec, and datestr. However, these older functions will continue to exist in MATLAB for the foreseeable future because we know that many people rely on them in existing code.
In a future post, I'll show how datetime, duration, and calendarDuration allow you to work with time zones, daylight saving time, and dates in languages other than English.
Do you see yourself using the new date and time data types with your data? Let us know by leaving a comment here.
Get the MATLAB code
Published with MATLAB® R2014b
Sean's pick this week is CAD APPS by Larry Silverberg.
For this week's pick I'm going to combine my grad school life in structural engineering with the two recent major snow storms we've had in New England over the last few weeks. My house has seen about 42 inches of snow. So let's make a quick model of the snow load on my back deck.
I'm going to model this as a frame, though not an ideal frame by any means: the snow load is distributed along it rather than applied only at the ends.
Larry's app allows me to draw the deck interactively (in meters, feet would be nice!)
First I needed to adjust the axes limits:
Next I need to draw the structure. There needs to be a node everywhere I apply a load because there doesn't seem to be a distributed load option. The deck is about 12ft wide with a 3ft cantilever resting about 8ft off the ground.
Then the hard part, figuring out what the load is in terms of Newtons per meter of deck. Being lazy, I only shoveled the outer half of the deck where it was easy to throw the snow off.
psf = 10;                  % Assume 10 pounds/square foot (lbf/ft^2)
width = 2;                 % 24 inches on center joists = 2 ft tributary width (ft)
ftPerMeter = 3.28;         % Feet per meter (ft/m)
newtonsPerPound = 4.448;   % Newtons per pound-force (N/lbf)
% lbf/ft^2 * ft * N/lbf * ft/m = N/m
weightPerMeter = width.*psf.*newtonsPerPound.*ftPerMeter;
disp(weightPerMeter)

291.7888
And now we simulate and look at the results!
My wood structural design books are somewhere in my attic at home, so I have no idea what percentage of capacity the current snow load is using. But with another 6-10 in of snow coming Monday, I probably ought to clear off the rest of it. Sigh.
Give it a try and let us know what you think here or leave a comment for Larry.
Published with MATLAB® R2015a
Introduction
Have you ever been in a supermarket checkout and wondered why you are in the slowest line? When I saw this article in Wired magazine, the title immediately grabbed my attention. As I read the article, I knew I had to put the theory to the test in SimEvents, a discrete-event simulation add-on to Simulink that is suitable for modeling queuing systems.
Two Types of Queues
I started by building a simple model of a four-register supermarket checkout. I built two parallel versions of the setup: one that uses four separate queues, and one with a single "serpentine" queue that feeds all four registers.
Let's see how this is implemented in SimEvents.
To get the simulation going, I need to model random customers entering the checkout area. I use entities in SimEvents to represent customers and I can generate these entities at random time intervals following an exponential distribution. During generation I can also specify a random duration (also exponentially distributed) that a customer will take to be served at a register by assigning a special attribute to the corresponding entity. I chose the average service time (usually denoted by $1/\mu$) to be 2 mins and the average arrival time (usually denoted by $1/\lambda$) to be 1 min.
I then clone each customer I generate so that I can exercise the two different line configurations identically.
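Outside SimEvents, the same random arrivals and service times can be sketched in a few lines of plain MATLAB. Since exprnd requires Statistics Toolbox, this sketch (variable names are mine) uses the inverse-CDF form with rand:

```matlab
meanArrival = 1;    % average interarrival time, 1/lambda (min)
meanService = 2;    % average service time, 1/mu (min)
nCustomers  = 10000;

% Exponentially distributed samples via inverse transform sampling
interArrival = -meanArrival * log(rand(nCustomers,1));
serviceTime  = -meanService * log(rand(nCustomers,1));

arrivalTimes = cumsum(interArrival);   % when each customer shows up
```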
To model the case where four separate queues feed the four cash registers, I use a Switch that routes customers to the shortest of the four Queues. Each Queue then feeds a Server representing a checkout register. This Server holds the customer for the amount of time that was set up during generation.
To model the "serpentine" queue, I use a single Queue that feeds the four registers via a Switch that routes customers to a free register when one becomes available.
As you probably noticed, the two models use Timer blocks (the ones with the Stop Watch icons) that keep track of timing statistics for the entities/customers. This allows me to plot the average wait time of customers in the two configurations as well as the minimum and maximum wait times.
Simulation Results
Let us run the model and see what gets plotted.
As you can see, the configuration with the four queues on average results in longer wait times!
Also reassuring is that if we compare the results from SimEvents to the theoretical M/M/c queue solution, we see a close match, with a theoretical mean time in system of 2.173913 minutes for the "serpentine" model:
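That theoretical value can be checked with the standard M/M/c (Erlang C) formulas; a sketch for the parameters above (lambda = 1 per min, mu = 1/2 per min, c = 4 registers):

```matlab
lambda = 1; mu = 1/2; c = 4;   % arrival rate, service rate, servers
a = lambda/mu;                 % offered load

% Erlang C: probability an arriving customer has to wait
k = 0:c-1;
erlangC = (a^c/factorial(c)) * (c/(c-a)) / ...
          (sum(a.^k ./ factorial(k)) + (a^c/factorial(c)) * (c/(c-a)));

% Mean time in system = service time + mean wait in queue
W = 1/mu + erlangC/(c*mu - lambda);
fprintf('Mean time in system: %.6f min\n', W)   % 2.173913
```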
Note that there are some approximations (but no exact analytical results) for the average wait time of the 'shortest queue' strategy as well; however, I will not get into that in this blog.
Advanced Analysis
Once I had my model set up, I also did some other fun experiments with it. I will not go into details in this post, but if you are interested you can download the model and associated files using the link at the end of this post and play with them. To pique your interest, I have summarized some of the interesting things I observed during my experiments:
I never thought I would say this, but, based on these experiments, perhaps it is time to give airlines and banks their due – they do set up a single line for transactions.
Now it's your turn
Download the final model and detailed description here, and let us know what you think by leaving a comment here.
Steve Eddins has recently posted a series in his blog about colormaps. I want to add one more post. With release R2014b, we are retiring jet as the default colormap, after many years of faithful service. But did you ever wonder where jet originated, and how it came to be the default? And did you ever come across colormaps like pink and bone?
MATLAB first began to fully support color and 3D graphics with release 4.0 in 1992. Graphics hardware at the time had a limited amount of memory and so could only display a limited number of colors in any one figure. All color graphics went through color maps. Even the colors in full color photographs had to be quantized into a few bits.
Pseudocolor applies to the display of quantities, such as mathematical functions, that do not have any inherent color. Here the colormap interprets properties such as height, density, or velocity as color. For example, the MathWorks logo, the L-shaped membrane, is the solution to a partial differential equation. As such, it does not have any color. A colormap, together with lighting and shading, makes it visually interesting.
The image behind the jet colormap comes from astrophysical fluid dynamics calculations made at NCSA, the National Center for Supercomputer Applications at the University of Illinois. I found the image on the Internet in the very early days of the Internet, possibly even before the availability of the Mosaic web browser, coincidentally also developed at NCSA in 1993. Astrophysicists Larry Smarr and Mike Norman were among the prominent researchers at NCSA.
The image is available in the MATLAB demos directory.
clear
load flujet
whos
snapnow
caption
  Name           Size        Bytes    Class     Attributes

  X            400x300      960000    double              
  caption        2x32          128    char                
  map           64x3          1536    double              
caption = Astrophysical jet from NCSA. One fluid collides with another.
The 400-by-300 array X contains values of fluid density that have been quantized to "flints" (floating point integers) in the range from 1 to 64. These serve as indices into the 64-by-3 array map, which is our jet colormap. The doubles in map range from 0.0 to 1.0 and specify the brightness of the red, green, and blue components of each pixel.
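Indexed color works by treating each pixel value as a row index into the map. A minimal sketch of doing that lookup by hand:

```matlab
% Convert the indexed image X into a truecolor (RGB) image manually.
rgb = zeros([size(X) 3]);
for ch = 1:3
    channel = map(:,ch);       % 64 brightness levels for this channel
    rgb(:,:,ch) = channel(X);  % index into them with the image values
end
image(rgb)   % same picture that image(X); colormap(map) would show
```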
image(X)
colormap(jet)
axis image
axis off
Even though jet was originally given as a 64-by-3 array of numbers, it is actually a piecewise linear function of the row index, with breaks at 1/8, 3/8, 5/8, and 7/8 of the length. jet is a variant of the "rainbow" colormaps whose faults Steve describes.
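That piecewise-linear structure means a jet-like map can be reconstructed with interp1. A sketch (the breakpoint values here are approximate, not the exact jet definition):

```matlab
% A jet-like colormap built by linear interpolation between breakpoints
% at roughly 1/8, 3/8, 5/8, and 7/8 of the index range.
m = 64;
x = [0   1/8  3/8  5/8  7/8  1  ];
R = [0   0    0    1    1    1/2];
G = [0   0    1    1    0    0  ];
B = [1/2 1    1    0    0    0  ];
map = interp1(x', [R' G' B'], linspace(0,1,m)');   % m-by-3 colormap
```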
rgbploter(@jet)
Here is another simple piecewise linear colormap, associated with black body radiation. It accurately describes the color obtained when a chisel made from high-carbon tool steel is heated with an oxyacetylene torch. See the web page of Don Dougan, Sculptor. I learned about this map many years ago from Neil Ostlund, a chemistry professor at the University of Waterloo who started Hypercube, Inc., a company that did molecular modeling on the Intel iPSC.
clf
image(X)
colormap(hot(64))
axis image
axis off
rgbploter(@hot)
Shortly after we introduced colormaps, I was sitting at a terminal demonstrating them to someone and I realized that the maps were just matrices. I could do array operations on them. I typed something like this.
p = sqrt((2*gray(64) + hot(64))/3);
Taking the square root of a color map! Wow, that's pretty strange. But the result was useful. A better name than "pink" would be "sepia". Here's an old photo I've used many times, tinted to make it look old.
clf
load gatlin
image(X)
colormap(pink(64))
axis image
axis off
rgbploter(@pink)
This spine X-ray was high resolution back in the day. To get a colormap that looks like old X-ray film, interchange the blue and red columns of hot and then moderate it with gray.
b = (7*gray(64) + fliplr(hot(64)))/8;
clf
load spine
image(X)
colormap(bone(64))
axis image
axis off
rgbploter(@bone)
We have colormaps named for all four of the seasons. They're all simple linear functions of the row index, like this spring.
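For instance, spring fades linearly from magenta to yellow, so an equivalent map can be written down directly (a quick sketch; the variable name is mine):

```matlab
% spring(m): red fixed at 1, green ramps up, blue ramps down
m = 64;
r = linspace(0,1,m)';
mySpring = [ones(m,1), r, 1-r];   % should match spring(m)
```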
The peaks function has proved to be remarkably popular for testing and demonstrating our 3D graphics. It's a modification of something I first saw back in my hypercube days from folks whose names I don't remember at Tektronix in Beaverton, Oregon. They were thinking of nearby Mount St. Helens.
clf
peaks
colormap(spring(64))
z = 3*(1-x).^2.*exp(-(x.^2) - (y+1).^2) ...
    - 10*(x/5 - x.^3 - y.^5).*exp(-x.^2 - y.^2) ...
    - 1/3*exp(-(x+1).^2 - y.^2)
rgbploter(@spring)
Let's see how the old astrophysical jet looks with the new default colormap. For discussion of parula, including many reader's comments, see Steve's blog.
clf reset
load flujet
image(X)
axis image
axis off
rgbploter(@parula)
By the way, if you want to see a list of all the colormaps, type
help graph3d
Published with MATLAB® R2014b
Jiro's pick this week is findInM by our very own Brett Shoelson.
If you know Brett, which you probably do if you've spent any time on MATLAB Central, you know about all the useful File Exchange entries he has contributed. It's no surprise he's ranked number 8. Aside from his many entries related to image processing, he has uploaded a number of utility functions, and findInM is a must-have. The title of the File Exchange entry starts with "FAST, PROGRAMMATIC string searching...", and that's exactly what it is: it searches for a string of text in MATLAB files, and it's fast and programmatic. There are already a number of entries on the File Exchange that search for text within files, including mfilegrep, mgrep, and grep. There is also an interactive way of searching from the toolstrip.
From the description of Brett's entry, findInM "can be much faster than any other method [he's] seen." The way Brett accomplishes this efficient search is by first creating an index of the MATLAB files in the folders. This step takes some time, but afterwards, the search happens on the index file and is very efficient.
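The index-then-search idea can be sketched in a few lines. This is my own toy illustration, not Brett's actual implementation: build a map from word to files once, and then every lookup is just a hash access.

```matlab
% Toy inverted index: word -> cell array of files containing it
fileList = {'a.m','b.m'};   % hypothetical list of MATLAB files to index
index = containers.Map('KeyType','char','ValueType','any');
for i = 1:numel(fileList)
    % tokenize the file contents into unique lowercase words
    words = unique(lower(regexp(fileread(fileList{i}),'\w+','match')));
    for w = words
        key = w{1};
        if isKey(index,key)
            index(key) = [index(key), fileList(i)];
        else
            index(key) = fileList(i);
        end
    end
end
% A search is then just a lookup, e.g.:
%   if isKey(index,'graph'), hits = index('graph'); end
```

Building the index costs time up front, but each subsequent search avoids re-reading any files.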
In this example, I first created an index of the /toolbox folder (and its subfolders) of my MATLAB installation. Then searching for some text across over 20,000 files took less than 10 seconds.
tic
s = findInM('graph theory','toolbox')
toc
SORTED BY DATE, MOST RECENT ON TOP:

s = 

    'C:\Program Files\MATLAB\R2014b\toolbox\bioinfo\bioinfo\Contents.m'
    'C:\Program Files\MATLAB\R2014b\toolbox\bioinfo\biodemos\graphtheorydemo.m'
    'C:\Program Files\MATLAB\R2014b\toolbox\matlab\sparfun\Contents.m'
    'C:\Program Files\MATLAB\R2014b\toolbox\matlab\sparfun\gplot.m'
    'C:\Program Files\MATLAB\R2014b\toolbox\bioinfo\graphtheory\Contents.m'

Elapsed time is 8.334972 seconds.
As a comparison, the interactive "Find Files" tool in MATLAB took over 5 minutes to do the same search.
Thanks for this great tool, Brett! I do have a couple of suggestions for improvement.
Give it a try and let us know what you think here, or leave a comment for Brett.
Published with MATLAB® R2014b
You've probably heard the "beer and diapers" story, the oft-quoted example of what data mining can achieve. It goes like this: some supermarket placed beer next to diapers and got more business because they mined their sales data and found that men often bought those two items together.
Today's guest blogger, Toshi Takeuchi, who works in the web marketing team here at MathWorks, gives you a quick introduction to how such analysis is actually done, and will follow up with how you can scale it for larger dataset with MapReduce (new feature in R2014b) in a future post.
I have been interested in Market Basket Analysis not because I work at a supermarket, but because it can be used for web usage pattern mining, among many other applications. Fortunately, I came across a good introduction in Chapter 6 (sample chapter available for free download) of Introduction to Data Mining. I obviously had to give it a try with MATLAB!
Let's start by loading the example dataset used in the textbook. Raw transaction data is turned into a binary matrix representation, with 1 meaning the given item was in the given transaction, and 0 otherwise.
transactions = {{'Bread','Milk'};...
    {'Bread','Diapers','Beer','Eggs'};...
    {'Milk','Diapers','Beer','Cola'};...
    {'Bread','Milk','Diapers','Beer'};...
    {'Bread','Milk','Diapers','Cola'}};
items = unique([transactions{:}]);
T = zeros(size(transactions,1), length(items));
for i = 1:size(transactions,1)
    T(i,ismember(items,transactions{i,:})) = 1;
end
disp(array2table(T,'VariableNames',items,'RowNames',{'T1','T2','T3','T4','T5'}))
Beer Bread Cola Diapers Eggs Milk ____ _____ ____ _______ ____ ____ T1 0 1 0 0 0 1 T2 1 1 0 1 1 0 T3 1 0 1 1 0 1 T4 1 1 0 1 0 1 T5 0 1 1 1 0 1
Market Basket Analysis is also known as Association Analysis or Frequent Itemset Mining. Let’s go over some basic concepts: itemsets, support, confidence, and lift.
The column headers of the table shows all the items in this tiny dataset. A subset of those items in any combination is an itemset. An itemset can contain anywhere from zero items to all the items in the dataset.
0 items -> null itemset (usually ignored)
1 item  -> 1-itemset
2 items -> 2-itemset
  :
k items -> k-itemset
This is, for example, a 3-itemset:
{Beer, Diapers, Milk}
A transaction can contain multiple itemsets as its subsets. For example, an itemset {Bread, Diapers} is contained in T2, but not {Bread, Milk}.
Support count is the count of how often a given itemset appears across all the transactions. Frequency of its appearance is given by a metric called support.
Support = itemset support count / number of transactions
itemset = {'Beer','Diapers','Milk'}
% get column indices of the items in the itemset
cols = ismember(items,itemset);
N = size(T,1);
fprintf('Number of transactions = %d\n',N)
% count rows that include all items
supportCount = sum(all(T(:,cols),2));
fprintf('Support count for this itemset = %d\n',supportCount)
itemSetSupport = supportCount/size(T,1);
fprintf('Support = %.2f\n',itemSetSupport)
itemset = 

    'Beer'    'Diapers'    'Milk'

Number of transactions = 5
Support count for this itemset = 2
Support = 0.40
Association rules are made up of antecedents (ante) and consequents (conseq) and take the following form:
{ante} => {conseq}
A k-itemset where k > 1 can be randomly divided into an ante and a conseq to form such a rule. Here is an example:
ante = {'Diapers','Milk'};   % pick some subset of the itemset as ante
% get the items not in ante
conseq = setdiff(itemset,ante);
fprintf('Itemset: {%s, %s, %s}\n', itemset{:})
fprintf('Ante   : {%s, %s}\n',ante{:})
fprintf('Conseq : {%s}\n',conseq{:})
fprintf('Rule   : {%s, %s} => {%s}\n', ante{:},conseq{:})
Itemset: {Beer, Diapers, Milk}
Ante   : {Diapers, Milk}
Conseq : {Beer}
Rule   : {Diapers, Milk} => {Beer}
You can think of this rule as "when diapers and milk appear in the same transaction, you see beer in the same transaction as well at some frequency". How do you determine which randomly generated association rules are actually meaningful?
The most basic measure of rules is confidence, which tells us how often a given rule applies within the transactions that contain the ante.
A given rule applies when all items from both antes and conseqs are present in a transaction, so it is the same thing as an itemset that contains the same items. We can use the support metric for the itemset to compute confidence of the rule.
Confidence = itemset support / ante support
cols = ismember(items,ante);
anteCount = sum(all(T(:,cols),2));
fprintf('Support Count for Ante = %d\n',anteCount)
anteSupport = anteCount/N;
fprintf('Support for Ante = %.2f\n',anteSupport)
confidence = itemSetSupport/anteSupport;
fprintf('Confidence = %.2f (= itemset support / ante support)\n',confidence)
Support Count for Ante = 3
Support for Ante = 0.60
Confidence = 0.67 (= itemset support / ante support)
Another measure of rules is lift. It compares the observed frequency of ante and conseq appearing together to what would be expected if they occurred independently, as a measure of interestingness. If lift is 1, ante and conseq occur together no more often than independence would predict, and there is no special relationship. If it is larger than 1, lift tells us how strongly ante and conseq depend on each other. We can use the respective support metrics to make this comparison.
Lift = itemset support / (ante support x conseq support)
cols = ismember(items,conseq);
conseqCount = sum(all(T(:,cols),2));
fprintf('Support Count for Conseq = %d\n',conseqCount)
conseqSupport = conseqCount/N;
fprintf('Support for Conseq = %.2f\n',conseqSupport)
lift = itemSetSupport/(anteSupport*conseqSupport);
fprintf('Lift = %.2f (= itemset support / (ante support x conseq support))\n',lift)
Support Count for Conseq = 3
Support for Conseq = 0.60
Lift = 1.11 (= itemset support / (ante support x conseq support))
Now that we know the basic concepts, we can define the goal of our analysis: finding association rules with a sufficient level of support (happens often enough) and confidence (association is strong). Lift can be a secondary measure to score the interestingness of the rules found.
In a brute force method, you would calculate the support and confidence of all possible itemset combinations, but that would be computationally expensive, because the number of candidates grows exponentially.
The Apriori algorithm addresses this issue by generating candidates selectively. To get an intuition, think about the frequency of an itemset that contains some infrequent item. That itemset can never be more frequent than the least frequent item it contains. So if you construct your candidates by combining only the frequent itemsets, starting from 1-itemsets and continuing to higher levels, you avoid creating useless candidates.
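This "no superset is more frequent than its subsets" property (anti-monotonicity of support) is easy to verify on the toy binary matrix T and item list items built above:

```matlab
% Support can only shrink (or stay the same) as items are added.
supp = @(cols) mean(all(T(:,cols),2));   % support from binary matrix T
supp(ismember(items,{'Beer'}))                    % 0.60
supp(ismember(items,{'Beer','Diapers'}))          % 0.60
supp(ismember(items,{'Beer','Diapers','Eggs'}))   % 0.20
```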
Let's start with generating frequent itemsets and get their support measures. I wrote a couple of MATLAB functions to implement this algorithm.
minSup = 0.6;   % minimum support threshold
[F,S] = findFreqItemsets(transactions,minSup);
fprintf('Minimum Support        : %.2f\n', minSup)
fprintf('Frequent Itemsets Found: %d\n', sum(arrayfun(@(x) length(x.freqSets), F)))
fprintf('Max Level Reached      : %d-itemsets\n', length(F))
fprintf('Number of Support Data : %d\n', length(S))
Minimum Support        : 0.60
Frequent Itemsets Found: 8
Max Level Reached      : 2-itemsets
Number of Support Data : 13
When we computed support for each itemset we evaluated, we stored the result in a Map object S. This is used for rule generation in order to avoid recalculating support as part of the confidence computation. You can now retrieve support for a given itemset with this object, by supplying the string representation of the itemset as the key. Let's try [2,4,6]:
itemset = [2,4,6];
fprintf('Support for the itemset {%s %s %s}: %.2f\n',items{itemset(:)},S(num2str(itemset)))
Support for the itemset {Bread Diapers Milk}: 0.40
This itemset clearly didn't meet the minimum support criteria of 0.60.
We saw earlier that you can generate rule candidates from frequent itemsets by splitting their contents into antes and conseqs, and computing their confidence.
If we generated every possible candidate by such a brute-force method, it would be very time consuming. The Apriori algorithm is also used to generate rules selectively. Let's say that this rule has low confidence:
{Beer, Diapers} => {Milk}
Then any other rule generated from this itemset that contains {Milk} in its consequent will also have low confidence.
{Beer} => {Diapers, Milk} {Diapers} => {Beer, Milk}
Why? Because the support for those antes will always be at least as large as that of the initial ante {Beer, Diapers}, while the support for the itemset (and hence for the rule) remains the same, and confidence is the ratio of the itemset's support to the ante's support.
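A quick check on the toy matrix T and item list items from earlier illustrates this:

```matlab
% Confidence never increases as items move from ante to conseq,
% because the ante's support can only grow.
supp = @(s) mean(all(T(:,ismember(items,s)),2));
supp({'Beer','Diapers','Milk'}) / supp({'Beer','Diapers'})   % 0.67
supp({'Beer','Diapers','Milk'}) / supp({'Diapers'})          % 0.50
```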
We can take advantage of this by first generating rules with only one item in the conseq and dropping those that don't meet the minimum criteria, then merging the surviving conseqs to generate rules with two items in the conseq, and so forth.
Now we can generate association rules from the frequent itemsets we generated in the previous step.
minConf = 0.8;   % minimum confidence threshold
rules = generateRules(F,S,minConf);
fprintf('Minimum Confidence : %.2f\n', minConf)
fprintf('Rules Found        : %d\n\n', length(rules))
for i = 1:length(rules)
    disp([sprintf('{%s}',items{rules(i).Ante}),' => ',...
        sprintf('{%s}', items{rules(i).Conseq}),...
        sprintf('  Conf: %.2f  ',rules(i).Conf),...
        sprintf('Lift: %.2f  ',rules(i).Lift),...
        sprintf('Sup: %.2f',rules(i).Sup)])
end
Minimum Confidence : 0.80
Rules Found        : 1

{Beer} => {Diapers}  Conf: 1.00  Lift: 1.25  Sup: 0.60
With a minimum support of 0.6 and a minimum confidence 0.8, we found only one rule that clears those thresholds: {Beer} => {Diapers}. The confidence is 1.00, which means we see diapers in all transactions that include beer, and the lift is 1.25, so this rule is fairly interesting.
As a non-supermarket use case of Market Basket Analysis, the textbook uses the 1984 Congressional Voting Records dataset from the UCI Machine Learning Repository. We can test our code against the result in the textbook.
For the purpose of Market Basket Analysis, you can think of individual members of Congress as transactions, and the bills they voted for, along with their party affiliation, as items.
getData
    party           handicapped_infants    water_project_cost_sharing    adoption_of_the_budget_resolution
    ____________    ___________________    __________________________    _________________________________
    'republican'    'n'                    'y'                           'n'
    'republican'    'n'                    'y'                           'n'
    'democrat'      '?'                    'y'                           'y'
    'democrat'      'n'                    'y'                           'y'
    'democrat'      'y'                    'y'                           'y'
Now we generate frequent itemsets with a minimum support threshold of 0.3. (We actually use a slightly lower threshold to account for double-precision round-off in the support comparison.)
minSup = 0.2988;
[F,S] = findFreqItemsets(votes,minSup);
fprintf('Minimum Support        : %.2f\n', minSup)
fprintf('Frequent Itemsets Found: %d\n', sum(arrayfun(@(x) length(x.freqSets), F)))
fprintf('Max Level Reached      : %d-itemsets\n', length(F))
fprintf('Number of Support Data : %d\n', length(S))
Minimum Support        : 0.30
Frequent Itemsets Found: 1026
Max Level Reached      : 7-itemsets
Number of Support Data : 2530
Then we generate some rules.
minConf = 0.9;   % minimum confidence threshold
rules = generateRules(F,S,minConf);
fprintf('Minimum Confidence : %.2f\n', minConf)
fprintf('Rules Found        : %d\n\n', length(rules))
Minimum Confidence : 0.90
Rules Found        : 2942
Finally, let's compare our results with the results from the textbook.
testAntes = [7,21,27; 5,11,23; 6,15,16; 22,31,32];
testConseqs = [2,1,2,1];
testConf = [0.91,0.975,0.935,1];
% Get the rules with a 1-item conseq containing the party classification
dem = rules(arrayfun(@(x) isequal(x.Conseq,1),rules));
rep = rules(arrayfun(@(x) isequal(x.Conseq,2),rules));
% Compare the results
for i = 1:size(testAntes,1)
    rec = dem(arrayfun(@(x) isequal(x.Ante,testAntes(i,:)),dem));
    rec = [rec,rep(arrayfun(@(x) isequal(x.Ante,testAntes(i,:)),rep))];
    disp(['{', sprintf('%s, %s, %s',vars{rec.Ante}),'} => ',...
        sprintf('{%s}', vars{rec.Conseq})])
    fprintf('   Conf: %.2f  Lift: %.2f  Sup: %.2f\n',rec.Conf,rec.Lift,rec.Sup)
    if isequal(rec.Conseq,testConseqs(i))
        fprintf('   Correct! Expected Conf %.2f\n\n',testConf(i))
    end
end
{El Salvador = Yes, Budget Resolution = No, Mx Missile = No} => {Republican}
   Conf: 0.91  Lift: 2.36  Sup: 0.30
   Correct! Expected Conf 0.91

{Budget Resolution = Yes, Mx Missile = Yes, El Salvador = No} => {Democrat}
   Conf: 0.97  Lift: 1.59  Sup: 0.36
   Correct! Expected Conf 0.97

{Physician Fee Freeze = Yes, Right To Sue = Yes, Crime = Yes} => {Republican}
   Conf: 0.94  Lift: 2.42  Sup: 0.30
   Correct! Expected Conf 0.94

{Physician Fee Freeze = No, Right To Sue = No, Crime = No} => {Democrat}
   Conf: 1.00  Lift: 1.63  Sup: 0.31
   Correct! Expected Conf 1.00
MATLAB has strong data visualization capabilities. Let's visualize conf, lift and support.
conf = arrayfun(@(x) x.Conf, rules);   % get conf as a vector
lift = arrayfun(@(x) x.Lift, rules);   % get lift as a vector
sup = arrayfun(@(x) x.Sup, rules);     % get support as a vector
colormap cool
scatter(sup,conf,lift*30,lift,'filled')
xlabel('Support'); ylabel('Confidence')
t = colorbar('peer',gca);
set(get(t,'ylabel'),'String','Lift');
title('1984 Congressional Voting Records')
You can see that the rules we generated match those in the textbook. The plot of support, confidence, and lift is useful for identifying rules with high support and high confidence (upper-right region of the plot) and high lift (more magenta). If you type gname at the MATLAB prompt, you can interactively identify the indices of those points, then press Enter to end the interactive session.
What was Rule 51? Let's find out.
disp('Rule ID = 51')
fprintf('{%s, %s} => {%s}, Conf: %.2f\n',...
    vars{rules(51).Ante(1)},vars{rules(51).Ante(2)},...
    vars{rules(51).Conseq},rules(51).Conf)
Rule ID = 51 {Budget Resolution = Yes, Physician Fee Freeze = No} => {Democrat}, Conf: 1.00
We used a small sample dataset in order to learn the basics of Market Basket Analysis. However, for real applications such as web usage pattern mining, we will typically be dealing with quite a large dataset. In a later post, I would like to figure out how to extend what we learned here with the help of MapReduce, introduced in R2014b. Any thoughts about your creative use of "Market Basket Analysis" with your data? Let us know here.
Here is the code used for this post.
findFreqItemsets() generates frequent itemsets using the Apriori method.
type findFreqItemsets.m
function [F,S,items] = findFreqItemsets(transactions,minSup,oneItemsets)
%FINDFREQITEMSETS generates frequent itemsets using the Apriori method
%   transactions is a nested cell array of transaction data, or a table
%   with a Value column that contains such a cell array. Each row contains
%   a nested cell array of items in a single transaction. If supplied as a
%   table, it also needs a Key column with a cell array of transaction ids.
%
%   minSup is a scalar that represents the minimum support threshold.
%   Itemsets that meet this criterion are 'frequent' itemsets.
%
%   oneItemsets is an optional table that lists all single items that
%   appear in transactions, along with their frequencies. Items are listed
%   in the Key column and corresponding counts in the Value column.
%
%   items is a cell array of unique items.
%
%   F is a structure array of frequent itemsets that meet the criterion.
%   Items are represented as indices into the cell array items.
%
%   S is a Map object that maps itemsets to their support values. Items
%   are represented as indices into the cell array items.
%
%   To learn more about the underlying algorithm itself, please consult
%   Ch. 6 of http://www-users.cs.umn.edu/~kumar/dmbook/index.php

% check the number of input arguments
narginchk(2, 3)
if iscell(transactions)
    transactions = table(num2cell(1:length(transactions))',transactions,...
        'VariableNames',{'Key','Value'});
end
if nargin == 2
    items = transactions.Value;
    [uniqItems,~,idx] = unique([items{:}]');
    oneItemsets = table(uniqItems,num2cell(accumarray(idx,1)),...
        'VariableNames',{'Key','Value'});
end
% get the total number of transactions
N = height(transactions);
% sort the tables
transactions = sortrows(transactions,'Key');
oneItemsets = sortrows(oneItemsets,'Key');
% get all frequent 1-itemsets
[F,S,items] = getFreqOneItemsets(oneItemsets,N,minSup);
if isempty(F.freqSets)
    fprintf('No frequent itemset found at minSup = %.2f\n',minSup)
    return
end
% get all frequent k-itemsets where k >= 2
k = 2;
while true
    % generate candidate itemsets
    Ck = aprioriGen(F(k-1).freqSets, k);
    % prune candidates below the minimum support threshold
    [Fk, support] = pruneCandidates(transactions,Ck,N,items,minSup);
    % update support data; if empty, exit the loop
    if ~isempty(support)
        % create a map object to store support data
        mapS = containers.Map();
        % convert vectors to chars for use as keys
        for i = 1:length(support)
            mapS(num2str(Ck(i,:))) = support(i);
        end
        % update S
        S = [S; mapS];
    else
        break;
    end
    % store the frequent itemsets above minSup; if empty, exit the loop
    if ~isempty(Fk)
        F(k).freqSets = Fk;
        k = k + 1;
    else
        break;
    end
end

    function [F1,S,C1] = getFreqOneItemsets(T,N,minSup)
    %GETFREQONEITEMSETS generates all frequent 1-itemsets
    %   1-itemsets are generated from the 1-itemset table T and pruned
    %   with the minimum support threshold minSup.

    % get 1-itemset candidates
    C1 = T.Key;
    % get their counts
    count = cell2mat(T.Value);
    % calculate support for all candidates
    sup = count./N;
    % create a map object and store support data
    S = containers.Map();
    for j = 1:length(C1)
        S(num2str(j)) = sup(j);
    end
    % prune candidates by minSup
    freqSet = find(sup >= minSup);
    % store the result in a structure array
    F1 = struct('freqSets',freqSet);
    end

    function [Fk,support] = pruneCandidates(T,Ck,N,items,minSup)
    %PRUNECANDIDATES returns frequent k-itemsets
    %   Compute support for each candidate in Ck by scanning the
    %   transactions table T to identify itemsets that clear the minSup
    %   threshold.

    % calculate the support count for all candidates
    support = zeros(size(Ck,1),1);
    % for each transaction
    for l = 1:N
        % get the item indices
        t = find(ismember(items, T.Value{l}));
        % increment the support count
        support(all(ismember(Ck,t),2)) = support(all(ismember(Ck,t),2)) + 1;
    end
    % calculate support
    support = support./N;
    % return the candidates that meet the criteria
    Fk = Ck(support >= minSup,:);
    end
end
aprioriGen() is a helper function for findFreqItemsets() and generates candidate k-itemsets using the Apriori algorithm.
type aprioriGen.m
function Ck = aprioriGen(freqSets, k)
%APRIORIGEN generates candidate k-itemsets using the Apriori algorithm
%   This function implements the F_(k-1) x F_(k-1) method, which merges
%   pairs of (k-1)-itemsets to generate new k-itemsets if the first (k-2)
%   items are identical between the pair.
%
%   To learn more about the underlying algorithm itself, please consult
%   Ch6 of http://www-users.cs.umn.edu/~kumar/dmbook/index.php

    % generate candidate 2-itemsets
    if k == 2
        Ck = combnk(freqSets,2);
    else
        % generate candidate k-itemsets (k > 2)
        Ck = [];
        numSets = size(freqSets,1);
        % generate pairs of frequent itemsets to merge
        for i = 1:numSets
            for j = i+1:numSets
                % compare the first k-2 items
                pair1 = sort(freqSets(i,1:k-2));
                pair2 = sort(freqSets(j,1:k-2));
                % if they are the same, merge
                if isequal(pair1,pair2)
                    Ck = [Ck; union(freqSets(i,:),freqSets(j,:))];
                end
            end
        end
    end
end
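The merge rule is easier to see in a small, language-neutral sketch. Here is the same F_(k-1) x F_(k-1) idea in Python, with itemsets represented as sorted tuples (an illustration, not a port of the MATLAB function):

```python
from itertools import combinations

def apriori_gen(freq_sets, k):
    """Candidate k-itemsets from (k-1)-itemsets given as sorted tuples:
    merge two itemsets when their first k-2 items agree."""
    if k == 2:
        # seed from the frequent 1-itemsets
        items = sorted({i for s in freq_sets for i in s})
        return [tuple(c) for c in combinations(items, 2)]
    candidates = []
    for a, b in combinations(freq_sets, 2):
        if a[:k - 2] == b[:k - 2]:          # first k-2 items identical
            candidates.append(tuple(sorted(set(a) | set(b))))
    return candidates

print(apriori_gen([(1, 2), (1, 3), (2, 3)], 3))   # -> [(1, 2, 3)]
```

Only (1,2) and (1,3) share their first item, so only one 3-itemset candidate is produced; (2,3) cannot merge with either.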
generateRules() returns the association rules that meet the minimum confidence criteria. It also uses aprioriGen() to generate rule candidates.
type generateRules.m
function rules = generateRules(F,S,minConf)
%GENERATERULES returns the association rules found from the frequent
%itemsets based on the minimum confidence threshold minConf.
%Association rules are expressed as {ante} => {conseq}.
%   F is a structure array of frequent itemsets
%   S is a Map object that maps itemsets to their support values
%   rules is a structure array of association rules that meet the minConf
%   criteria, along with confidence, lift and support metrics.
%
%   To learn more about the underlying algorithm itself, please consult
%   Ch6 of http://www-users.cs.umn.edu/~kumar/dmbook/index.php

    rules = struct('Ante',{},'Conseq',{},'Conf',{},'Lift',{},'Sup',{});

    % iterate over itemset levels k where k >= 2
    for k = 2:length(F)
        % iterate over k-itemsets
        for n = 2:size(F(k).freqSets,1)
            % get one k-itemset
            freqSet = F(k).freqSets(n,:);
            % get 1-item consequents from the k-itemset
            H1 = freqSet';
            % if the itemset contains more than 2 items
            if k > 2
                % go to ap_genrules()
                rules = ap_genrules(freqSet,H1,S,rules,minConf);
            else
                % go to pruneRules()
                [~,rules] = pruneRules(freqSet,H1,S,rules,minConf);
            end
        end
    end

    function rules = ap_genrules(Fk,H,S,rules,minConf)
    %AP_GENRULES generates candidate rules from rule consequents
    %   Fk is a row vector representing a frequent itemset
    %   H is a matrix that contains one rule consequent per row
    %   S is a Map object that stores support values
    %   rules is a structure array that stores the rules
    %   minConf is the threshold to prune the rules

        m = size(H,2); % size of rule consequent
        % if the frequent itemset is longer than the consequent by more than 1
        if length(Fk) > m+1
            % prune 1-item consequents by minConf
            if m == 1
                [~,rules] = pruneRules(Fk,H,S,rules,minConf);
            end
            % use aprioriGen to generate longer consequents
            Hm1 = aprioriGen(H,m+1);
            % prune consequents by minConf
            [Hm1,rules] = pruneRules(Fk,Hm1,S,rules,minConf);
            % if we have consequents that meet the criteria, recurse until
            % we run out of such candidates
            if ~isempty(Hm1)
                rules = ap_genrules(Fk,Hm1,S,rules,minConf);
            end
        end
    end

    function [prunedH,rules] = pruneRules(Fk,H,S,rules,minConf)
    %PRUNERULES calculates the confidence of given rules and drops rule
    %candidates that don't meet the minConf threshold
    %   Fk is a row vector representing a frequent itemset
    %   H is a matrix that contains one rule consequent per row
    %   S is a Map object that stores support values
    %   rules is a structure array that stores the rules
    %   minConf is the threshold to prune the rules
    %   prunedH is a matrix of consequents that met minConf

        % initialize a return variable
        prunedH = [];
        % iterate over the rows of H
        for i = 1:size(H,1)
            % a row in H is a conseq
            conseq = H(i,:);
            % ante = Fk - conseq
            ante = setdiff(Fk, conseq);
            % retrieve support for Fk
            supFk = S(num2str(Fk));
            % retrieve support for ante
            supAnte = S(num2str(ante));
            % retrieve support for conseq
            supConseq = S(num2str(conseq));
            % calculate confidence
            conf = supFk / supAnte;
            % calculate lift
            lift = supFk/(supAnte*supConseq);
            % if the threshold is met
            if conf >= minConf
                % append the conseq to prunedH
                prunedH = [prunedH, conseq];
                % generate a rule
                rule = struct('Ante',ante,'Conseq',conseq,...
                    'Conf',conf,'Lift',lift,'Sup',supFk);
                % append the rule
                if isempty(rules)
                    rules = rule;
                else
                    rules = [rules, rule];
                end
            end
        end
    end
end
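The two metrics pruneRules computes follow directly from the stored supports: confidence is the support of the full itemset divided by the support of the antecedent, and lift additionally divides by the support of the consequent. A minimal sketch with toy support values (assumed for illustration only):

```python
def rule_metrics(sup_full, sup_ante, sup_conseq):
    """Confidence and lift for the rule {ante} => {conseq}, given the
    supports of the full itemset, the antecedent, and the consequent."""
    conf = sup_full / sup_ante
    lift = sup_full / (sup_ante * sup_conseq)
    return conf, lift

# toy values: the full itemset appears in 40% of transactions,
# antecedent and consequent each in 50%
conf, lift = rule_metrics(0.4, 0.5, 0.5)
```

A lift above 1 means the antecedent and consequent occur together more often than independence would predict, which is why the rule is interesting beyond its raw confidence.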
getData is a script that fetches the congressional voting data using the RESTful API access function webread() that was introduced in R2014b, and preprocesses the data for Market Basket Analysis.
type getData.m
% Let's load the dataset
clearvars; close all; clc

% the URL of the source data
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/voting-records/house-votes-84.data';

% define the function handle to process the data you read from the web
myreadtable = @(filename) readtable(filename,'FileType','text',...
    'Delimiter',',','ReadVariableNames',false);

% set up the function handle as the ContentReader for the webread function
options = weboptions('ContentReader',myreadtable);

% get the data
votes = webread(url,options);

% set the variable names
votes.Properties.VariableNames = {'party',...
    'handicapped_infants',...
    'water_project_cost_sharing',...
    'adoption_of_the_budget_resolution',...
    'physician_fee_freeze',...
    'el_salvador_aid',...
    'religious_groups_in_schools',...
    'anti_satellite_test_ban',...
    'aid_to_nicaraguan_contras',...
    'mx_missile',...
    'immigration',...
    'synfuels_corporation_cutback',...
    'education_spending',...
    'superfund_right_to_sue',...
    'crime',...
    'duty_free_exports',...
    'export_administration_act_south_africa'};

% display a small portion of the data
disp(votes(1:5,1:4))

% turn the table into a matrix
C = table2array(votes);
% create a logical array of yes votes
arr = strcmp(C(:,2:end), 'y');
% append a logical array of no votes
arr = [arr, strcmp(C(:,2:end), 'n')];
% create logical arrays of party affiliation
dem = strcmp(C(:,1),'democrat');
rep = strcmp(C(:,1),'republican');
% combine them all
arr = [dem,rep,arr];
% create a cell array to hold the votes of each member of congress
votes = cell(size(dem));
% for each member, find the indices of the votes
for i = 1:length(dem)
    votes{i} = find(arr(i,:));
end

% variable names that map to the indices
vars = {'Democrat',...
    'Republican',...
    'Handicapped Infants = Yes',...
    'Water Project = Yes',...
    'Budget Resolution = Yes',...
    'Physician Fee Freeze = Yes',...
    'El Salvador = Yes',...
    'Religious Groups = Yes',...
    'Anti Satellite = Yes',...
    'Nicaraguan Contras = Yes',...
    'Mx Missile = Yes',...
    'Immigration = Yes',...
    'Synfuels Corp = Yes',...
    'Education Spending = Yes',...
    'Right To Sue = Yes',...
    'Crime = Yes',...
    'Duty Free = Yes',...
    'South Africa = Yes',...
    'Handicapped Infants = No',...
    'Water Project = No',...
    'Budget Resolution = No',...
    'Physician Fee Freeze = No',...
    'El Salvador = No',...
    'Religious Groups = No',...
    'Anti Satellite = No',...
    'Nicaraguan Contras = No',...
    'Mx Missile = No',...
    'Immigration = No',...
    'Synfuels Corp = No',...
    'Education Spending = No',...
    'Right To Sue = No',...
    'Crime = No',...
    'Duty Free = No',...
    'South Africa = No'};
Get the MATLAB code
Published with MATLAB® R2014b
Deflategate
Corey: Hey guys, did you hear about the Patriots "Deflategate" story?
Zack: Yeah, they say that 11 out of 12 balls were inflated 2 PSI less than the minimum value!
Corey: There could be many reasons why the balls were measured at the prescribed 12.5 PSI before the game but lost pressure by halftime.
Matt: Do you think they cheated, or that this situation can be explained by some math and physics?
Seth: We should make a Simulink model of the thermodynamics involved in that story.
Guy: My thermodynamics knowledge is pretty rusty, but I bet that with Simscape we can build up something in just a few minutes.
A few minutes later...
Guy: Et voilà! Here is what the model looks like:
Seth: Good job guys! Can you explain to us how you built that?
Zack: Sure! As you can see, the football is modeled using a Constant Volume Pneumatic Chamber. To the thermal port, we connected in series the convection inside, the conduction through the leather and the convection outside.
Corey: For the outside ambient temperature, we took the historical data from a weather station in nearby Norwood, MA.
Guy: And the Compressor Control subsystem is there to first inflate the ball with air at a fixed temperature up to a desired pressure. A Stateflow chart turns off the inflating action for the rest of the simulation once the ball first gets up to the prescribed pressure.
Matt: Great, I heard a few theories that the ball being inflated with hot air could explain this pressure drop. Can we confirm that?
The Results
Corey: With this model, we are able to follow the evolution of the temperature and pressure inside the ball throughout the game day.
Zack: The story begins inside, with an ambient temperature of 73 degrees Fahrenheit. To test the hot air hypothesis, the ball is inflated with air at 150 degrees Fahrenheit. As you can see, this takes a bit less than a minute, and almost immediately after we stop feeding hot air, the air inside the ball cools down to ambient temperature.
Matt: I guess that rules out the hot air theory. I am curious, though; I am sure I read somewhere that the football should take longer than that to cool down when taken from inside to outside.
Guy: We are getting there... One interesting thing I found is that the component with the most impact here is the thermal mass of the leather. This is why the hot air is immediately cooled down by the leather at ambient temperature.
Corey: The simulation times all correspond to what happened, or at least what's been reported. The balls are filled roughly 2 hours before game time, and not taken outside until just minutes before kickoff. As the simulation goes, we then move the ball outside, to about 50 degrees Fahrenheit, and then it takes roughly 20 minutes for the air inside to stabilize at the new ambient temperature.
Zack: The ideal gas law governs the magnitude of the pressure drop: roughly 1 PSI when the air temperature changes from 73 to 50 degrees Fahrenheit. But thanks to Simscape we can see the time it takes to get there.
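Zack's "roughly 1 PSI" figure is easy to check with the ideal gas law at constant volume, where absolute pressure scales with absolute temperature. A back-of-the-envelope sketch (assuming sea-level atmospheric pressure of 14.7 psi to convert gauge readings to absolute pressure):

```python
def gauge_pressure_after_cooling(p_gauge_psi, t_warm_f, t_cold_f, p_atm=14.7):
    """Gauge pressure after cooling at constant volume (ideal gas law)."""
    to_kelvin = lambda f: (f - 32) * 5 / 9 + 273.15
    p_abs = p_gauge_psi + p_atm                       # gauge -> absolute
    p_cold = p_abs * to_kelvin(t_cold_f) / to_kelvin(t_warm_f)
    return p_cold - p_atm                             # back to gauge

p_halftime = gauge_pressure_after_cooling(12.5, 73, 50)
drop = 12.5 - p_halftime   # comes out a little over 1 psi
```

With 12.5 psi gauge at 73 °F cooling to 50 °F, the computed drop is on the order of 1.2 psi, consistent with the "roughly 1 PSI" quoted in the dialogue.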
Seth: And I can see that at halftime you simulate the ball being taken back inside and warming up.
Matt: We simulated the ambient temperature going back up to room temperature, assuming that the refs took the balls into their locker room to test them. If they stuck a gauge into the footballs right away, the pressure would still have read low. But wait a few more minutes…
Corey: When combined with other possible effects like accuracy of the gauge and leakage, it seems pretty obvious that the Patriots didn't cheat.
Guy: Corey, you are too much of a fan... The results of our simulation show how the ball pressure and temperature varied through that day, but in no way can they confirm what happened that day... too many unknowns.
Now it's your turn
If you are interested, you can download our simulation here.
Do you also use Simulink or Simscape to analyze controversial topics? Let us know by leaving a comment here.
And one last thing ... if you want to know just which way we might be biased on this issue, well, just check out where our corporate headquarters are located.
Today's post is by Sean de Wolski and me. We spent a few days this week at training for the start of the year, held in a Sheraton hotel.
At the completion of our stay, we each got a note in our rooms. Here is a sample.
Actually we read this in with the App from the Image Acquisition Toolbox and cropped the image (imcrop from Image Processing Toolbox) to just display the mysterious binary text.
Using functionality from the Computer Vision System Toolbox, we then converted our binary text using the ocr (Optical Character Recognition) function.
Icrop = imread('Sher.png');
bintext = ocr(Icrop)
str = bintext.Words;
str = [str{:}];
bintext = 

  ocrText with properties:

                      Text: '010101000110100001100001011011100110101100100...'
    CharacterBoundingBoxes: [879x4 double]
      CharacterConfidences: [879x1 single]
                     Words: {14x1 cell}
         WordBoundingBoxes: [14x4 double]
           WordConfidences: [14x1 single]
Next, we guessed how the text was encoded: 4 bits or 8 bits? We figured 8 was most likely.
newstr = reshape(str',8,[])';
Let's look at the first few lines.
snip = newstr(1:3,:)
snip =

01010100
01101000
01100001
Could these be text?
bin2dec(snip)
ans =

    84
   104
    97
These could be ASCII characters!
Have you ever misread an O for a 0? Or an l for a 1? We have! So we decided to program defensively for those and any other weird swaps that might confuse the OCR.
newstr(newstr == 'O') = '0';
newstr(newstr ~= '0') = '1';
Finally let's write out the result.
message = char(bin2dec(newstr))';
disp(message(1:74))
disp(message(75:end))
Thank you for your continued loyalty to Sheraton Boston. It is a pleasure to host Mathworks year after year.
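The whole pipeline (defensive character substitution, 8-bit grouping, binary-to-ASCII conversion) can be condensed into a few lines. Here is a minimal Python sketch of the same idea (the helper name is illustrative, not from the original code):

```python
def decode_binary_text(raw, width=8):
    """Decode OCR'd 0/1 text into ASCII, defensively mapping the letter
    O to 0 and every other non-space character to 1."""
    bits = ''.join('0' if ch in '0O' else '1'
                   for ch in raw if not ch.isspace())
    chunks = [bits[i:i + width] for i in range(0, len(bits), width)]
    return ''.join(chr(int(c, 2)) for c in chunks)

print(decode_binary_text('01010100 01101000 01100001'))   # -> Tha
```

The defensive substitution means an OCR confusion such as 'O' for '0' or 'l' for '1' decodes to the same message.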
by David Wey
Have you ever wondered who created the file you’re looking at on the File Exchange, or who that nice person was that answered your question on MATLAB Answers? Maybe you thought to yourself, “I wonder what else that person has done?” As of this week, answering these questions got a lot easier.
NEW PROFILE PAGES
Our latest update to MATLAB Central includes new profile pages. These pages aggregate content from across all the MATLAB Central areas, thus easing the pain of having separate profile pages for each area (Answers, Cody, File Exchange, and so on). On these profile pages you’ll be able to see how active the person has been and how recently they’ve participated in the community. You will see all their answers, files, Cody problems, trends, and links.
To view someone’s new profile page, click on any author link wherever you see it, whether in Cody or the File Exchange. The popup displays a summary of information related to that area of MATLAB Central (such as the File Exchange or Answers). Click again on the user name to view the profile page or click on the other labels to see their submissions as a filtered search.
We hope you’ll enjoy the new profile pages and we welcome your thoughts and reactions. Check out some of your favorite community member pages, as well as your own, and let us know what you think in the comments below.
I promised earlier to post about divergent colormaps. A divergent colormap is usually constructed by concatenating two colormaps together that have different color schemes.
Here is an example of a divergent colormap from www.ColorBrewer.org by Cynthia A. Brewer, Geography, Pennsylvania State University.
rgb = [ ...
     94    79   162
     50   136   189
    102   194   165
    171   221   164
    230   245   152
    255   255   191
    254   224   139
    253   174    97
    244   109    67
    213    62    79
    158     1    66 ] / 255;

b = repmat(linspace(0,1,200),20,1);
imshow(b,[],'InitialMagnification','fit')
colormap(rgb)
This colormap has the brightest color in the middle, which is typical for divergent colormaps.
A divergent colormap is used to compare data values to a reference value in a way that visually highlights whether values are above or below the reference.
Let me show you an example using sea surface temperatures. I learned today that 28 degrees Celsius has a specific meaning in hurricane modeling and prediction. It is considered to be the minimum temperature for hurricane formation.
Using the NASA EOSDIS Reverb tool, I found and downloaded sea surface temperature from the Aqua satellite for August 1, 2011. The data arrived as a netCDF file.
amsre = ncread('20110801AMSREREMSSL2P_GRIDDED_25amsre_20110801rtv01.nc',... 'sea_surface_temperature'); size(amsre)
ans =

        1440         720           2
The two planes contain data from two different orbits.
orbit1 = flipud(amsre(:,:,1)');
orbit2 = flipud(amsre(:,:,2)');
imshow(orbit1,[],'InitialMagnification','fit')
title('Orbit 1')
imshow(orbit2,[],'InitialMagnification','fit')
title('Orbit 2')
The black regions are NaNs representing missing data. Combine the two orbits by taking the maximum.
sst = max(orbit1,orbit2);
imshow(sst,[],'InitialMagnification','fit')
Now grab a subset of the data that includes a portion of the Atlantic Ocean.
ssta = sst(180:400,330:700);
imshow(ssta,[],'InitialMagnification','fit')
Apply the divergent colormap.
colormap(rgb)
colorbar
OK, I see one problem right off the bat. It looks like our data is in Kelvin. Let's fix that.
ssta = ssta - 273.15;
imshow(ssta,[],'InitialMagnification','fit')
colormap(rgb)
colorbar
Now I want to make use of our divergent colormap to highlight the temperature of interest, 28 degrees. To do that, I'll use the CLim property of the axes object. The CLim property is a twoelement vector. The first element tells us the data value that maps to the lowest colormap color, and the second element tells us the data value that maps to the highest colormap color. MATLAB computes these values automatically based on the range of the data.
get(gca,'CLim')
ans =

   -1.9500   33.6000
To get the middle colormap color to represent 28 degrees, we need to pick our two CLim values so that they are equally far away from 28 degrees. After some experimentation, I settled on a range from 22 degrees to 34 degrees.
set(gca,'CLim',[22 34])
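The arithmetic behind this choice is just a symmetric interval around the reference value, so that the middle colormap color lands exactly on the reference. A one-line sketch (the helper name is illustrative):

```python
def clim_for_reference(ref, half_range):
    """Symmetric color limits so the colormap's middle color maps to ref."""
    return ref - half_range, ref + half_range

lo, hi = clim_for_reference(28, 6)
print(lo, hi)   # 22 34
```

Any half-range works as long as it covers the interesting spread of the data; here 6 degrees gives the [22, 34] limits used above.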
That's looking better, but it isn't helpful to have the missing data displayed using the bottom color of the colormap. I will fix that problem using this procedure.
Here are just the first two steps.
mask = isnan(ssta);
graymask = 0.5 * mask;
graymask = repmat(graymask,1,1,3);
hold on
h = imshow(graymask);
hold off
The missing values are displayed as gray, but we can't see the sea surface temperatures. Fix that by using the NaN mask as the overlay image's AlphaData.
h.AlphaData = mask; % This syntax requires MATLAB R2014b.
Temperatures close to the threshold are displayed using yellow, the brightest color in the colormap and the one right in the middle. Temperatures above the threshold are displayed using oranges and reds, while temperatures below are displayed using greens and blues.
Before I go, I want to give a shout to the new Developer Zone blog that just started today. If you are into advanced software developer topics, I encourage you to head over there and take a look.
If $n$ is odd, replace $n$ by $3n+1$, if not, replace $n$ by $n/2$. Repeat. A famous conjecture made by Lothar Collatz is that no matter what value of $n$ is chosen to start, the process eventually terminates at $n=1$. Do not expect a proof, or a counterexample, in this blog.
You might want to download function threenplus1.m from my textbook Numerical Computing with MATLAB. This not only produces the graphs in this post, it also has a pair of uicontrols that allow you to increase and decrease the value of n.
Here is the core of threenplus1. The generated sequence is collected in a vector y. We don't know ahead of time how long y is going to be. In fact, that's the point of the computation. Does the while loop terminate and, if so, how long is y? The ability of MATLAB to grow a vector an element at a time comes in very handy here.
dbtype 44:52 threenplus1
44   y = n;
45   while n > 1
46      if rem(n,2)==0
47         n = n/2;
48      else
49         n = 3*n+1;
50      end
51      y = [y n];
52   end
A good example is provided by starting with $n = 7$. Here is the sequence collected in y. Its length is 17 and its max is 52.
7 22 11 34 17 52 26 13 40 20 10 5 16 8 4 2 1
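The same iteration is easy to sketch outside MATLAB. Here is a minimal Python version (an illustration of the loop above, not the threenplus1 code itself):

```python
def collatz(n):
    """Collect the 3n+1 sequence starting at n, ending at 1."""
    seq = [n]
    while n > 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        seq.append(n)
    return seq

seq = collatz(7)
print(len(seq), max(seq))   # 17 52
```

As in the MATLAB version, the open question is whether the while loop terminates for every starting value; for n = 7 it does, in 17 steps with a maximum of 52.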
Here is the plot. The y-axis is logarithmic in anticipation of some fairly large values in later runs. Notice that as soon as the sequence hits a power of 2, it plunges to termination.
It is interesting to see how the graphs produced by threenplus1 change from one value of $n$ to the next. Here are the first ten elements of y for $n$ = 108, 109, and 110.
108  54  27  82  41 124  62  31  94  47 . . .
109 328 164  82  41 124  62  31  94  47 . . .
110  55 166  83 250 125 376 188  94  47 . . .
After the first eight steps all three sequences reach the same value, so they are identical after that. The length is 114 and the max is 9232. Here are the three graphs, superimposed on one another. The similarity between the three sequences is very apparent when you use the increment n buttons provided by threenplus1.
Prime numbers of the form $n = 2^p-1$ where $p$ itself is prime are known as Mersenne primes. But that's a whole 'nother story. For now, suffice it to say that such numbers are candidates to produce 3n+1 sequences with a high maximum value. Here is $n = 2^{13}-1$. The sequence reaches 6,810,136 before it eventually terminates in 159 steps.
I have to admit that this triggers a bug in some versions of MATLAB. If you see spurious characters in the formatting of the tick labels on the y-axis, insert this line near the end of threenplus1.
set(gca,'yticklabel',int2str(get(gca,'ytick')'))
Check out the two references below about orbits associated with the Collatz conjecture.
When I Google "three n plus 1", the first link is to an excellent Wikipedia article on the "Collatz conjecture". According to Wikipedia, the famous German mathematician Lothar Collatz first made the conjecture, in 1937, that the process terminates for any starting value. I am already familiar with Collatz's name because of his work in numerical analysis and approximation theory, including finite difference methods for differential operators. My mentor John Todd had some contact with Collatz shortly after the end of World War II that could be the subject of a blog post some day.
Many others have also been interested in the 3n+1 problem, including Stanislaw Ulam (see my previous blog post), Shizuo Kakutani, Sir Bryan Thwaites, Helmut Hasse, Paul Erdos, John Horton Conway, Douglas Hofstadter, and Richard Guy. But Wikipedia quotes Paul Erdos as saying about the Collatz conjecture: "Mathematics may not be ready for its solution."
Wikipedia, Collatz Conjecture, <http://en.wikipedia.org/wiki/Collatz_conjecture>
Jason Davies, Collatz Conjecture Orbits, <http://www.jasondavies.com/collatz-graph>
xkcd, Collatz Conjecture Orbits, <http://xkcd.com/710>
The first game I thought of was Connect Four, since it’s a straightforward extension of Tic Tac Toe. And sure enough, it’s not too hard to convert our Tic Tac Toe engine into a Connect Four engine.
CONNECT FOUR
In Connect Four, your board is a 6-by-7 grid, and your goal is to be the first to put four pieces in a line. The game board is oriented vertically, so gravity causes your pieces to drop to the bottom of the grid. This means that you have at most seven legal moves. This is good for us, since it constrains the number of possible moves, which in turn makes it quicker for us to play hundreds of games.
The game harness botMoves doesn’t change at all. We just make a new class, ConnectFour, that knows the rules and can draw the Connect Four game board. And when I say “knows the rules”, I mean exactly two things: it can list the legal moves on the current board, and it can tell whether the game is over. That’s it.
Finding legal moves is easy. We just look for any column that has zeroes left in it. But how do you determine if someone has won the game? We need to scan the board for four-in-a-row of either color in any of four directions: vertical, horizontal, and two diagonals. This seemed like it was going to call for some careful thinking, until I remembered that there was a Cody problem about this very topic! See Problem 90. Connect Four Win Checker.
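For reference, the brute-force scan such a win checker performs can be sketched in a few lines of Python (a generic illustration of the four-direction scan, not the Cody solution itself):

```python
def four_in_a_row(board, player):
    """Scan a 2-D board for four of player's pieces in a line:
    horizontal, vertical, or either diagonal."""
    rows, cols = len(board), len(board[0])
    for r in range(rows):
        for c in range(cols):
            # step directions: right, down, down-right, down-left
            for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
                if all(0 <= r + i * dr < rows and 0 <= c + i * dc < cols
                       and board[r + i * dr][c + i * dc] == player
                       for i in range(4)):
                    return True
    return False

board = [[0] * 7 for _ in range(6)]
board[5][2:6] = [1, 1, 1, 1]       # four in the bottom row
print(four_in_a_row(board, 1))     # True
```

The four direction vectors are enough because a line read right-to-left is the same line read left-to-right.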
I shopped around in the answers to Problem 90 and found a nice piece of code authored by long time Cody champ @bmtran: Solution 2314.
I wired everything together, and it plays a pretty mean game. Here’s an animation of my Connect Four bot playing itself.
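The Monte Carlo move selection at the heart of such a bot can be sketched in a few lines. This is a hypothetical interface, not the actual botMoves code: `playout(move)` is assumed to play one randomized game to completion after the given move and return a score.

```python
import random

def monte_carlo_move(legal_moves, playout, n=200):
    """Pick the legal move whose n random playouts score best on average.
    playout(move) returns 1 for a win and 0 otherwise (assumed interface)."""
    def avg_score(move):
        return sum(playout(move) for _ in range(n)) / n
    return max(legal_moves, key=avg_score)

# toy stand-in for a real game: column 3 wins 90% of random playouts
playout = lambda move: 1 if random.random() < (0.9 if move == 3 else 0.1) else 0
random.seed(0)
print(monte_carlo_move(range(7), playout))   # picks column 3
```

All the game knowledge lives inside the playout function; the selector itself is completely game-agnostic, which is exactly why the same harness can drive Connect Four, its variants, and Reversi.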
CONNECT FOUR VARIANTS
The Monte Carlo approach means that we haven’t had to put a lot of special game-specific smarts into the code. This gives us great freedom to explore variants on our basic Connect Four game. Suppose, instead of racing to get four in a row, players were trying to be the first to make an “L” shape. In order to make this work, I just had to change a few lines in the isGameOver method. And rather than clone and modify the entire ConnectFour class, I built a FourGameBase class from which I could quickly subclass variants. This base class knows how to draw the board and pick legal moves. The winning conditions are handled by the subclasses.
Here’s my ConnectEl game bot playing itself.
We can imagine a whole family of Connect Tetris variants: be the first to make a square, or a T-shape, or an S-shape. Not all of these variants are interesting as games. I built square and T-shape versions of the game, and in both cases defensive play was so easy that, as with Tic Tac Toe, all the games ended in ties. But this ability to rapidly create and play games leads to another observation. We can use our Monte Carlo code to mutate games in search of interesting variants. An interesting game is one in which either side has a reasonable chance of winning. An uninteresting game consistently ends in a tie or victory for a particular side. So we can effectively use bots to do our playtesting for us. The optimal strategies simply emerge from the Monte Carlo soup.
I want to mention one last Connect Four variant before I move on: Four Corners. In Four Corners, you try to force your opponent to be the first to put his pieces into the four corners of a rectangle.
In this case, I used code from a Cody answer by Alfonso Nieto-Castanon to test for the victory condition: Spot the rectangle.
Here is my bot slugging it out with itself in Four Corners.
This is a fun game to play against the bot. It’s surprisingly tricky to spot all the potential rectangles on the board.
REVERSI
For my last example, I wrote a program that does a passable job of playing Reversi. As before, I used Cody to diminish the coding effort by posting two problems.
Here’s the animation of the game in action.
WHAT NEXT?
If you, like me, are intrigued with the possibilities of Monte Carlo gaming, I encourage you to take a look at my code on GitHub. Maybe you can add some new games to the list that I’ve already created. There are plenty of games that would work with this harness. Or you can improve the code that’s already there, making it speedier or more efficient. Because of the brute force nature of the Monte Carlo technique, it would be an ideal place to experiment with parallel programming. Or you might make it more interactive, with a clicktomove interface.
I’ll stop here, but this is clearly a rich space to explore. It’s a powerful reminder that computers are very different from brains. Sometimes you can make up for not being clever by being stupid very fast.
Simulation Target
If you click play in the following model:
You will notice that a folder named slprj, containing a lot of code and four MEX-files, will be created:
Simulink Preferences
If you do not want the slprj folder and MEX-files to be generated in your current directory, it is possible to go into the Simulink Preferences and specify a Simulation Cache Folder and a Code Generation Folder to be used instead of the local directory.
Overriding the Simulink Preferences
Now, let's say I need to switch often between different projects, and each of those projects needs to use different folders for code generation. In that case, it is possible to use Simulink.fileGenControl. For example, if I include Simulink Project in my workflow, the startup shortcut for my project could look like:
project = simulinkproject;
projectRoot = project.RootFolder;
myCacheFolder = fullfile(projectRoot, 'work');
myCodeFolder = fullfile(projectRoot, 'code');
Simulink.fileGenControl('set',...
    'CacheFolder', myCacheFolder,...
    'CodeGenFolder', myCodeFolder,...
    'createDir', true)
That way, all the code and MEXfiles generated by the model will go in those folders and the current directory will remain clean.
Once you are done working on this project, you can restore the values stored in the Simulink Preferences using this line:
Simulink.fileGenControl('reset')
Don't Clean Up: Faster Initialization
Simulink checks the cached files at initialization to make sure they are up to date and match the model. This prevents regenerating files, and results in faster initialization. If you regularly work with the same models, keeping these derived files around can help you save time when switching between projects.
Now it's your turn
Let us know how you use this feature by leaving a comment here.
Today I’d like to introduce guest blogger Ken Atwell who works for the MATLAB Development team here at MathWorks. Today, Ken will be discussing with you the MapReduce programming technique now available in the R2014b release of MATLAB. MapReduce provides a way to process large amounts of file-based data on a single computer in MATLAB. For very large data sets, the same MATLAB code written using MapReduce can also be run on the "big data" platform, Hadoop®.
The data set we will be using consists of records containing flight performance metrics for USA domestic airlines for the years 1987 through 2008. Each year consists of a separate file. If you have experimented with "big data" before, you may already be familiar with this data set. The full data set can be downloaded from here. A small subset of the data set, airlinesmall.csv, is also included with MATLAB® to allow you to run this and other examples without downloading the entire data set.
MapReduce is a programming technique that is used to "divide and conquer" big data problems. In MATLAB, the mapreduce function requires three input arguments: a datastore to read data from, a map function that operates on chunks of that data, and a reduce function that combines the intermediate results produced by the map calls.
This is an oversimplification to some extent, since the output of a call to the map function can be shuffled and combined in interesting ways before being passed to the reduce function. This will be examined later.
Let's look at a straightforward example to illustrate how mapreduce is used. In this example we want to find the longest flight time out of all the flights recorded in the entire airline data set. To do this we will create a datastore to access the data, write a map function that finds the maximum flight time in each chunk, write a reduce function that finds the overall maximum, and run mapreduce.
datastore is used to access collections of tabular text files stored on a local disk, or the Hadoop® Distributed File System (HDFS™). It is also the mechanism for providing data in a chunkwise manner to the map calls when using mapreduce. A previous blog post explained how datastore works and how it is used for reading collections of data that are too large to fit in memory.
Let's create our datastore and first preview the data set. This allows us to peek into the data set, and identify the format of the data and the columns that contain the data we are interested in. The preview will normally provide a small chunk of data that contains all the columns present in the data set, although I will only show a handful of columns to make our article more readable.
ds = datastore('airlinesmall.csv', 'DatastoreType', 'tabulartext', ...
    'TreatAsMissing', 'NA');
data = preview(ds);
data(:,[1,2,3,9,12])
ans = 

    Year    Month    DayofMonth    UniqueCarrier    ActualElapsedTime
    ____    _____    __________    _____________    _________________

    1987    10       21            'PS'              53
    1987    10       26            'PS'              63
    1987    10       23            'PS'              83
    1987    10       23            'PS'              59
    1987    10       22            'PS'              77
    1987    10       28            'PS'              61
    1987    10        8            'PS'              84
    1987    10       10            'PS'             155
For the computation we are looking to perform, we only need to look at the "ActualElapsedTime" column: it contains the actual flight time. Let's configure our datastore to only provide this one column to our map function.
ds.SelectedVariableNames = {'ActualElapsedTime'};
Now we will write our map function (MaxTimeMapper.m). Three input arguments will be provided to our map function by mapreduce: the input data (one chunk from the datastore, supplied as a table), a collection of configuration and contextual information (unused here, so it is ignored with ~), and an intermediate data storage object to which the map function adds key-value pairs.
Our map function finds the maximum value in the table 'data', and saves a single key ('MaxElapsedTime') and corresponding value to the intermediate data storage object. We will now go ahead and save the following map function (MaxTimeMapper.m) to our current folder.
function MaxTimeMapper(data, ~, intermediateValuesOut)
    maxTime = max(data{:,:});
    add(intermediateValuesOut, 'MaxElapsedTime', maxTime);
end
Next, we create the reduce function (MaxTimeReducer.m). Three input arguments will also be provided to our reduce function by mapreduce: the active key (unused here, since our map function adds only one key), an iterator over the intermediate values associated with that key, and a final output storage object to which the reduce function adds key-value pairs.
Our reduce function will receive a list of values associated with the 'MaxElapsedTime' key which were generated by the calls to our map function. The reduce function will iterate through these values to find the maximum. We create the following reduce function (MaxTimeReducer.m) and save it to our current folder.
function MaxTimeReducer(~, intermediateValuesIn, finalValuesOut)
    maxElapsedTime = -inf;
    while hasnext(intermediateValuesIn)
        maxElapsedTime = max(maxElapsedTime, getnext(intermediateValuesIn));
    end
    add(finalValuesOut, 'MaxElapsedTime', maxElapsedTime);
end
Once the map and reduce functions are written and saved in our current folder, we can call mapreduce and reference the datastore, map function, and reduce function to run our calculations on our data. The readall function is used here to display the results of the MapReduce algorithm.
result = mapreduce(ds, @MaxTimeMapper, @MaxTimeReducer);
readall(result)

Parallel mapreduce execution on the local cluster:
********************************
*      MAPREDUCE PROGRESS      *
********************************
Map   0% Reduce   0%
Map 100% Reduce 100%

ans = 

    Key                 Value 
    ________________    ______
    'MaxElapsedTime'    [1650]
If Parallel Computing Toolbox is available, MATLAB will automatically start a pool and parallelize execution of the map functions. Since the number of calls to the map function by mapreduce corresponds to the number of chunks in the datastore, parallel execution of map calls may speed up overall execution.
The use of keys is an important feature of mapreduce. Each call to the map function can add intermediate results to one or more named "buckets", called keys.
If the map function adds values to multiple keys, this leads to multiple calls to the reduce function, with each reduce call working on only one key's intermediate values. The mapreduce function automatically manages this data movement between the map and reduce phases of the algorithm.
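To make the multiple-key mechanism concrete, here is a minimal sketch (not part of the airline example; the 'short'/'long' buckets and the two-hour threshold are invented for illustration) of a map function that routes rows into two keys. mapreduce would then call the reduce function once for 'short' and once for 'long'.

```matlab
function ShortLongMapper(data, ~, intermKVStore)
% Hypothetical map function: split each chunk's flights into two
% buckets keyed 'short' and 'long'. Each call to add appends one
% intermediate value (here, a vector of times) to the named key.
    isShort = data.ActualElapsedTime < 120;   % under two hours (illustrative cutoff)
    add(intermKVStore, 'short', data.ActualElapsedTime(isShort));
    add(intermKVStore, 'long',  data.ActualElapsedTime(~isShort));
end
```

Because the two buckets have different keys, their intermediate values never mix: each reduce call receives a ValueIterator over one key's values only.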
The behavior of the map function in this case is more complex, and it illustrates the benefit of using multiple keys to store your intermediate data. For every airline found in the input data, the map function uses add to store a vector of values: a count of the number of flights for that carrier for each day in the 21+ years of data. The carrier code is the key for this vector of values. Using one key per carrier ensures that all of the data for each carrier will be grouped together when mapreduce passes it to the reduce function.
Here's our new map function (CountFlightsMapper.m).
function CountFlightsMapper(data, ~, intermediateValuesOut)
dayNumber = days(datetime(data.Year, data.Month, data.DayofMonth) - datetime(1987,10,1)) + 1;
daysSinceEpoch = days(datetime(2008,12,31) - datetime(1987,10,1)) + 1;
[airlineName, ~, airlineIndex] = unique(data.UniqueCarrier);
for i = 1:numel(airlineName)
    dayTotals = accumarray(dayNumber(airlineIndex==i), 1, [daysSinceEpoch, 1]);
    add(intermediateValuesOut, airlineName{i}, dayTotals);
end
end
The reduce function is less complex. It simply iterates over the intermediate values and adds the vectors together. At completion, it outputs the values in this accumulated vector. Note that the reduce function does not need to sort or examine the intermediateKeysIn values; each call to the reduce function by mapreduce only passes the values for one airline.
Here's our new reduce function (CountFlightsReducer.m).
function CountFlightsReducer(intermediateKeysIn, intermediateValuesIn, finalValuesOut)
daysSinceEpoch = days(datetime(2008,12,31) - datetime(1987,10,1)) + 1;
dayArray = zeros(daysSinceEpoch, 1);
while hasnext(intermediateValuesIn)
    dayArray = dayArray + getnext(intermediateValuesIn);
end
add(finalValuesOut, intermediateKeysIn, dayArray);
end
To run this new analysis, just reset the datastore and select the active variables of interest. In this case we want the date (year, month, day) and the airline carrier name. Once the map and reduce functions are written and saved in our current folder, mapreduce is called referencing the updated datastore, map function, and reduce function.
reset(ds);
ds.SelectedVariableNames = {'Year', 'Month', 'DayofMonth', 'UniqueCarrier'};
result = mapreduce(ds, @CountFlightsMapper, @CountFlightsReducer);
result = result.readall();

Parallel mapreduce execution on the local cluster:
********************************
*      MAPREDUCE PROGRESS      *
********************************
Map   0% Reduce   0%
Map 100% Reduce  50%
Map 100% Reduce 100%
Up until this point, we have only been analyzing the sample data set (airlinesmall.csv). To look at the number of flights each day over the full data set, let's load the results of running our new MapReduce algorithm on the entire data set.
load airlineResults
Before looking at the number of flights per day for the top 7 carriers, let us first apply a filter to the data to smooth out the effects of weekend travel. This would otherwise clutter the visualization.
lines = result.Value;
lines = horzcat(lines{:});
[~,sortOrder] = sort(sum(lines), 'descend');
lines = lines(:,sortOrder(1:7));
result = result(sortOrder(1:7),:);
lines(lines==0) = nan;
for carrier = 1:size(lines,2)
    lines(:,carrier) = filter(repmat(1/7, [7 1]), 1, lines(:,carrier));
end

figure('Position',[1 1 800 800]);
plot(datetime(1987,10,1):caldays(1):datetime(2008,12,31), lines)
title('Domestic airline flights per day per carrier')
xlabel('Date')
ylabel('Flights per day (7-day moving average)')
try
    carrierList = readtable('carriers.csv.txt', 'ReadRowNames', true);
    result.Key = cellfun(@(x) carrierList(x, :).Description, result.Key);
catch
end
legend(result.Key, 'Location', 'SouthOutside')
The interesting characteristic highlighted by the plot is the growth of Southwest Airlines (WN) during this time period.
The code we have just created was developed on our desktop computer, and it allows us to analyze data that normally would not fit into the memory of our machine. But what if our data were stored on the "big data" platform Hadoop®, and the data set were too large to reasonably transfer to our desktop computer?
Using Parallel Computing Toolbox along with MATLAB® Distributed Computing Server™, this same code can be run on a remote Hadoop cluster. The map and reduce functions would remain unchanged, but two configuration changes are required:

- Point the datastore at the data stored in HDFS rather than at the local file
- Configure mapreduce, via mapreducer, to execute on the Hadoop cluster rather than locally
After these configuration changes, the algorithm can then be run on Hadoop.
MATLAB Compiler can be used to deploy applications created using MapReduce, and it can also generate Hadoop-specific components for use in production settings.
Learn more about MapReduce in MATLAB with our algorithm examples and let us know how you might see MapReduce being used to analyze your big data here.
Get the MATLAB code

Published with MATLAB® R2014b
COMPUTERS, CHESS, AND GO
I read an article in IEEE Spectrum about computer programs that play Go (AIs Have Mastered Chess. Will Go Be Next?). If you review the history of game-playing computers, you'll see that chess programs improved steadily until eventually they could beat the best human players. Go programs, on the other hand, have been stuck at a level of play nowhere close to the best humans. Why is that?
The basic element of a gameplaying program is the lookahead. Essentially, the program says “If I move here, is that better or worse than if I move there?” In chess, this is straightforward to evaluate. But in Go, this basic lookahead strategy doesn’t work so well. It’s much harder to evaluate whether one board position is stronger than another.
But recently, Go programs have started to get much better. So what happened?
TWO IDIOTS FINISH THE GAME
Go programs have improved by applying a Monte Carlo technique. It’s nothing like how a human plays, but it works remarkably well. And it only works because we can ask the computer to do a lot of dumb stuff very quickly. I call it “Two Idiots Finish the Game”.
Consider the following situation. You've reached a critical point in the game. We'll call it position X. You're considering move A and move B. Which one should you make? Instead of looking just one move ahead, play the game all the way to completion. But there's an obvious problem with this: if you're not smart enough to figure out your next move, how can you play an entire game? Simple: just ask two idiots to make random (but legal) moves until one of them wins. Then return the game to position X and have them play again. And again. And again and again. Sometimes they start with move A, and sometimes B. After your speedy but not-so-clever friends have played a few thousand games, examine the record. Is an idiot (with an idiot for an opponent) more likely to win with move A or move B? Those simulated games will give you the answer. Here's the amazing thing: the idiot's best move is your best move too. Don't ask one clever mouse to solve the maze. Release ten thousand stupid mice and follow the lucky ones. This is what cheap computation buys you.
What’s beautiful about this approach is that it’s completely free of strategy. You don’t need to build up special knowledge structures about any particular game. You just need to know what moves are legal and how the game ends.
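In outline, the move chooser described above needs only those two game-specific operations: list the legal moves and detect the end of the game. Here is a sketch of that playout loop; the method names (possibleMoves, makeMove, isGameOver, winner, whoseMove) are hypothetical stand-ins, not the actual interface of the class in the GitHub repository.

```matlab
function bestMove = chooseMove(game, nSimulatedGames)
% Monte Carlo move chooser sketch: for each legal move from the
% current position, play many games of random (but legal) moves to
% completion, and pick the move with the most simulated wins.
    moves = game.possibleMoves;              % hypothetical interface
    wins  = zeros(size(moves));
    for m = 1:numel(moves)
        for s = 1:nSimulatedGames
            g = copy(game);                  % restart from position X
            g.makeMove(moves(m));
            while ~g.isGameOver
                legal = g.possibleMoves;
                g.makeMove(legal(randi(numel(legal))));   % idiot move
            end
            wins(m) = wins(m) + (g.winner == game.whoseMove);
        end
    end
    [~, best] = max(wins);
    bestMove = moves(best);
end
```

No board evaluation, no opening book: the only "knowledge" is the rule set, which is what makes the harness so easy to retarget at a new game.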
TIC TAC TOE
As soon as I read about this technique, I wanted to try it in MATLAB. So let's make a program that can play Tic Tac Toe (also known as Noughts and Crosses). I've written Tic Tac Toe programs in MATLAB before. I've tried to make them clever, and I've tried to make them learn. It's not that hard. What's fun about this Monte Carlo approach is that, with minimal effort, I can teach it a new game. In fact, it makes playing lots of games easy. With a little object-oriented programming, you can write a generic game-playing harness. Then you just need to plug in some code that knows a few rules, and presto! You've got an instant game-playing program.
Here’s what I did. I made a class called TicTacToe that knows the rules of the game and how to draw the board. Then I wrote a function called botMoves that can look at the game object and make the next move. The separation is very clean. All of the Monte Carlo logic mentioned above lives in botMoves.
I only need a short script to have the bot play itself.
game = TicTacToe;
nSimulatedGames = 1000;
while ~game.isGameOver
    botMoves(game, nSimulatedGames);
end
The variable nSimulatedGames refers to the number of simulated games we’ll ask our idiot friends to play for each potential move. Here’s an animation of what it looks like in action.
As it happens, the computer always ties itself. That's actually good news, since Tic Tac Toe is unwinnable if your opponent is the least bit clever. So each side of our bot plays well enough to keep the other from winning. A little playtesting shows that it's smart enough to avoid losing to a human too. But if we prefer, we can make the program less competitive by lowering the number of simulated games it plays. If I only let it run ten simulated games for each possible move, I can beat it easily.
I haven’t displayed much of my code here in the blog, but you can get your hands on it at this GitHub repository: MonteCarloGames. Here is the TicTacToe class, and here is the botMoves function.
NEXT WEEK
This is the first of a two-part post. Next time we'll show how quickly we can adapt our simple Tic Tac Toe harness for other games. We'll also bring a community element into our programming: we'll use Cody to source some of the tricky parts of our coding effort!
Jiro's pick this week is num2words by Stephen Cobeldick.
Just recently, I witnessed a typo on a check payment. The amount was for "$19050" (unfortunately, it wasn't for me) but the written text read "One thousand nine hundred fifty dollars". Many people might wonder why someone would make such a simple mistake. Did the person miscount the number of zeroes?
I think I have an idea. This was written in Japan. In Japan, and in many other Asian countries, large numbers are grouped in myriads (every 10,000): 10^4 is called man (万), 10^8 is called oku (億), and 10^12 is called cho (兆). In Western counting, however, large numbers are grouped in 1000s: thousand, million, billion, trillion, etc. So, it's likely that the person writing the check wrote "19050", read it as "1 man 9050", and incorrectly translated it to "1 thousand 9 hundred 50". Maybe...
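To see how the same digits read differently under the two conventions, here's a small illustration (my own snippet, not part of the File Exchange entry) that inserts a separator every three digits, Western style, and every four digits, myriad style, using a lookahead in regexprep:

```matlab
% Group '19050' two ways: a comma before every trailing group of
% exactly three digits (Western) or of exactly four digits (myriad).
n = '19050';
western = regexprep(n, '(\d)(?=(\d{3})+$)', '$1,')   % '19,050' -> nineteen thousand fifty
myriad  = regexprep(n, '(\d)(?=(\d{4})+$)', '$1,')   % '1,9050' -> 1 man 9050
```

The same five digits land in very different spoken groups, which is exactly the trap the check writer fell into.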
Anyway, all of this could have been avoided if this person had access to Stephen's num2words! Just let his function spell out the number in words.
num2words(19050)
ans = nineteen thousand and fifty
This is the simplest use case, but num2words comes with a wealth of options to customize how the results are returned.
num2words(19050,'type','cheque')
ans = nineteen thousand and fifty dollars only
Stephen includes detailed help in the function, so you can learn about all the options just by typing
help num2words
What gave me a little chuckle was that when I looked inside his function, I could see that he made sure it can count all the way up to uncentillion. So if you need to spell out anything larger, you might be out of luck. :)
Give this a try, and let us know what you think here or leave a comment for Stephen.
The prime spiral was discovered by Stanislaw Ulam in 1963, and featured on the cover of Scientific American in March, 1964.
Stanislaw Ulam (1909-1984) was an eminent Polish-American mathematician who spent most of his career at Los Alamos, but who also had connections with several American universities, especially the University of Colorado. His many illustrious colleagues included von Neumann, Teller, Metropolis, Erdos, Richtmyer, Rota, and Kac.
Ulam had a wide range of mathematical interests. He was deeply involved in the early development of computers at Los Alamos. He is one of the primary developers of the Monte Carlo method. Later in his career he made contributions to mathematical biology and medicine. He wrote popular books, including "Adventures of a Mathematician" and "Analogies Between Analogies".
As the story goes, Ulam was bored during a seminar in 1963 and began to doodle in his notebook. He numbered the integer lattice points in the plane in a spiral fashion and then highlighted the prime numbers. He was surprised to see that with this numbering the primes were not distributed randomly throughout the plane, but appeared to fall on diagonal segments.
Here is the 10by10 prime spiral, showing the spiral numbering and the 25 primes among the first 100 integers. The most prominent diagonal segment includes the primes 19, 7, 23, 47, and 79.
Ulam's numbering at any point $(i,j)$ in the plane is a piecewise quadratic function of $i$ and $j$. For example, the values along the upper and lower halves of the antidiagonal, or their offsets by one, (depending upon whether the size is odd or even), are perfect squares. So there are certainly not any primes along these diagonals.
This reminds me of the quadratics that generate primes. The most famous is Euler's polynomial,
$$ p(n) = n^2 - n + 41 $$
This generates primes for $n$ from 1 through 40, but, obviously, not for $n$ equal to 41.
n = 1:41;
p = n.^2 - n + 41;
isprime(p)

ans =

  Columns 1 through 13
     1   1   1   1   1   1   1   1   1   1   1   1   1
  Columns 14 through 26
     1   1   1   1   1   1   1   1   1   1   1   1   1
  Columns 27 through 39
     1   1   1   1   1   1   1   1   1   1   1   1   1
  Columns 40 through 41
     1   0
The Euler primes themselves don't show up in the Ulam prime spiral. But each of the diagonal segments corresponds to a quadratic polynomial, although probably one that generates fewer than 40 primes.
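Here is a one-line argument for why every diagonal is quadratic. Successive cells on a diagonal are one full loop of the spiral apart, and each loop around the spiral adds eight cells to the ring's circumference, so the gaps between consecutive diagonal entries grow linearly:

$$ n_{k+1} - n_k = 8k + c . $$

Summing these linearly growing gaps gives a quadratic,

$$ n_k = 4k^2 + bk + c' , $$

where $b$, $c$, and $c'$ depend on which diagonal you follow.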
The MATLAB demos directory has a spiral function that generates Ulam's numbering. I'm afraid that the code is ancient, and pretty bad. It's not too hard to write better code. This generates successively larger spirals by rotating one of a given size 180 degrees and then appending another column at the right and another row at the bottom to produce the next larger size.
type improved_spiral
function S = improved_spiral(n)
% SPIRAL  SPIRAL(n) is an n-by-n matrix with elements ranging
% from 1 to n^2 in a rectangular spiral pattern.
S = [];
for m = 1:n
    S = rot90(S,2);
    S(m,m) = 0;
    p = m^2 - m + 1;
    v = (m-1:-1:0);
    S(:,m) = p - v';
    S(m,:) = p + v;
end
Once we have the spiral numbering, the prime spiral is our old friend, the spy plot.
type primespiral_core.m
function primespiral(n)
% PRIMESPIRAL  PRIMESPIRAL(n) is Ulam's prime spiral.
P = zeros(n,n);
P(primes(n^2)) = 1;
S = improved_spiral(n);
P = P(S);
spy(P)
Here are prime spirals of three different sizes.
The function primespiral in my textbook Numerical Computing with MATLAB includes a provision for starting the numbering at a value larger than one. Animating this feature emphasizes the diagonal nature of the prime locations. The title on this plot is the starting value of the numbering.
And so...another year over, a new one just begun.
2014 saw some very big changes. MATLAB has changed, subtly. If you haven't tried R2014b yet, try it; we think you'll like it. The new MATLAB graphics system (long awaited!) is live, and while there may be some growing pains, we're excited about the new things it will enable us to do. In this forum, we welcomed Sean as a lead blogger, and Brett stepped aside to pursue other projects. (He's still around, contributing now and then!)
And once again, the year has seen big growth of MATLAB Central, and lots of big additions to the File Exchange. So without further ado, here is the retrospective of 2014's Picks of the Week.
You can read the original blog posts for each of these picks by clicking on the names of the files and following the links to the "Pick of the Week" tag. As always, your comments are welcome.
- Smooth 3D bezier curves with implicit control points: Uses Hobby's algorithm to plot smooth curves in 3D through specified control points.
- Sphere Fit (least squared): Fits a sphere to a set of noisy data. Does not require a wide arc or many points.
- Texas Hold'Em Poker Analyzer: Analyzes the win frequency and expected return of a hand in a poker game called Texas Hold'Em.
- URL Filter: Scrape one or more numbers off of a web page.
- International Flight Route Planning Simulator using Aerospace Blockset: Example that shows how you can visualize airline routes on the globe.
- Bellagio Fountain Simulation: Simulation of the Bellagio Fountain (100 nozzles).
- Polygon simplification: Simplifies a closed polygon by reducing the number of vertices to a specified number.
- Worker Object Wrapper: Simplifies managing resources such as large data within PARFOR loops and SPMD blocks.
- Draw a Polar Dendrogram: Draws a polar dendrogram.
- Convert seconds to human readable string(s): Convert an amount of seconds to hours/days/weeks/etc.
- Straightforward COPY and PASTE functions: Allows very simple manual exchange with other applications through the clipboard.
- Packing Santa's Sleigh: The solution by team alfnie to the MathWorks/Kaggle Packing Santa's Sleigh competition.
- 2048 MATLAB Edition: A MATLAB implementation of the 2048 game.
- Communications System Toolbox Support Package for RTL-SDR Radio (MathWorks Communications System Toolbox Team): Design and prototype software-defined radio (SDR) systems using RTL-SDR with MATLAB and Simulink.
- kml2struct: Import a .kml file as a series of shapefile structs. Like kml_shapefile, but more stable.
- BOT: Block diagram Optimization Tool.
- ISSTracker V1.0.0: International Space Station Live Tracking GUI.
- Wing Designer: Computes aircraft performance measures from wing and engine parameters.
- Automatic enveloping, variance change and activity detection with Hilbert Transform: Automatic signal segmentation and activity detection with Hilbert Transform and smoothing.
- cspy.m: A color-coded version of the sparse matrix viewer spy.
- MultipleColon: Multiple colon intervals.
- Configurable Simulink Model for DC-DC Converters with PWM PI Control: A Simulink model configurable to buck, boost and buck-boost DC-DC converters with PWM PI control.
- Useful Figure Management Utilities: 3 useful figure management utilities.
- C-MEX Programming Tutorial Examples: This submission contains several C scripts that help as an introduction to C-MEX programming.
- module (encapsulate MATLAB package into a name space module): Modules are a handy way to access packages in MATLAB.
- legappend: Append new entries to an existing legend.
- parTicToc: A utility for timing PARFOR loops.
- ThingSpeak support from MATLAB (MathWorks Internet of Things Team): Prototype Internet of Things applications using ThingSpeak and MATLAB.
- PianoTuner: Graphical aid for tuning the middle octave of a piano.
- Reference Creator: Find in-text citations and create the reference list!
- Image Blur Metric: Measure the blur level of a still image.
- Blind image quality assessment through anisotropy: This measure discriminates blur and Gaussian noise.
- Noise Level Estimation from a Single Image: It can precisely estimate noise level from a single image.
- Simscape Language: Nonlinear rotational spring (Torque == kx^2): Simscape language example showing how to use a MATLAB function in the equations section.
- SimRF Models of Analog Devices RF Agile Transceivers: Simulate and verify agile RF transceivers and predict the impact of RF imperfections using test signals.
- Colormouse: Mouse control of the color axis.
- makeinstall: Create a single install file for your toolbox distribution.
- Makeinstall Technology: The technology behind the Makeinstall tool.
- QuadSim: Parameter-driven Simulink model for quadcopter simulation and control system design.
- Display progress, pause or stop a time-consuming loop: Allows you to display a loop's progress, pause (and also change related code) or stop it.
- Simulink Dual Gravity Drained Tank: Process control simulation tutorial for PID tuning.
- votebar: Function to plot 'election-like' 3D bars.
- Fast Noise Estimation in Images: Estimate the standard deviation of the noise in a grayscale image.
- Round with significant digits: Rounds towards the nearest number with N significant digits.
- Natural-Order Row Sort: Natural-order row sort of a cell array of strings, with customizable numeric format.
- Natural-Order Filename Sort: Natural-order sort of filenames or filepaths, with customizable numeric format.
- Customizable Natural-Order Sort: Natural-order sort of a cell array of strings, with customizable numeric format.
- Growth (Yet another Obfuscated MATLAB code): Obfuscated MATLAB code where the source looks like its output, with a functional programming demo.
- Distribute figures: This function can distribute all open figures on different parts of the screen(s).
- Koch curve: Interface for generating the Koch curve.
Guest blogger, Kelly Luetkemeyer, who is a software developer at MathWorks, returns with an article on accessing RESTful web services using MATLAB. Kelly's previous articles included Tracking a Hurricane using Web Map Service and Visualizing the Gulf of Mexico Oil Slick using Web Map Service. Kelly was the lead developer of the new RESTful web service interface feature in MATLAB.
In R2014b, MATLAB introduced three new features to access RESTful web services: webread, websave, and weboptions.
To illustrate the use of these features, let's do a thought experiment and imagine that you are interested in applying for a software development job at MathWorks. This blog post illustrates how you can use RESTful web services to help in your job search. In particular it shows you how to:
The first step in your job search is to obtain a listing of all available jobs for your desired location. Job listings are available in an RSS feed, and you can use the function webread to read the feed. The RSS feed is in XML format; specify the 'xmldom' content type in order to return a Java Document Object Model (DOM) for easy access to the data. Read the RSS feed and parse the data to create a listing of jobs of interest.
options = weboptions('ContentType','xmldom');
rssURL = 'http://www.mathworks.com/company/jobs/opportunities/rss.xml';
dom = webread(rssURL, options);
Retrieve all the items from the DOM that are tagged 'item'.
items = dom.getElementsByTagName('item');
itemsLength = items.getLength;
Create a table to contain the information from the RSS feed. Map the XML tag names to the table variable names by using two cell arrays. Initialize the table using cell2table.
tagNames = {'id', 'location', 'title', 'job_function', 'job_type'};
tableVariableNames = {'Reference', 'Location', 'Title', 'Department', 'Type'};
emptyItems = cell(itemsLength, length(tableVariableNames));
jobsTable = cell2table(emptyItems, 'VariableNames', tableVariableNames);
Loop through each item and add the information to the table. Use zero-based indexing to obtain each node, since the data is contained in a Java object. Use one-based indexing for the table array.
for k = 0:itemsLength-1
    item = items.item(k);
    for n = 1:length(tagNames)
        node = item.getElementsByTagName(tagNames{n});
        jobsTable{k+1,n} = {char(node.item(0).getTextContent)};
    end
end
Select your desired location.
myLocation = 'US-MA-Natick';
rows = strcmpi(jobsTable.Location, myLocation);
natickJobs = jobsTable(rows,:);
natickJobs.Location = [];
Select your desired department.
myDepartment = 'software development';
rows = strcmpi(natickJobs.Department, myDepartment);
natickSoftwareJobs = natickJobs(rows,:);
natickSoftwareJobs.Department = [];
Display a table of possible jobs that match your request.
natickSoftwareJobs
natickSoftwareJobs = 

    Reference    Title                                                                      Type
    _________    _____                                                                      ____
    '14070'      'MATLAB Editor Software Engineer (Senior) (14070-MCAR)'                    'Experienced'
    '14064'      'Software Engineer - Graphical Platform (14064-BWAL)'                      'Experienced'
    '14049'      'Release Engineer (14049-BHIL)'                                            'Experienced'
    '14028'      'Software Engineer (14028-MCOL)'                                           'Experienced'
    '14027'      'Software Engineer (14027-MCOL)'                                           'Experienced'
    '14023'      'Software Developer (14023-MCOL)'                                          'Experienced'
    '13866'      'Controls Systems Engineer (13866-MCAR)'                                   'Experienced'
    '13859'      'Software Engineer (13859-MCOL)'                                           'Experienced'
    '13857'      'Web User Interface Developer (13857-MCOL)'                                'Experienced'
    '13856'      'Senior Software Engineer (13856-MCOL)'                                    'Experienced'
    '13855'      'Software Engineer (13855-MCOL)'                                           'Experienced'
    '13854'      'Software Engineer - GUI Components (13854-MCOL)'                          'Experienced'
    '13853'      'Build and Release Engineer - Perl / Shell (13853-MCOL)'                   'Experienced'
    '13851'      'Code Replacement Library Software Engineer (13851-GMAR)'                  'Experienced'
    '13848'      'Senior Performance Engineer (13848-MCAR)'                                 'Experienced'
    '13793'      'Software Engineer - MATLAB Development Tools (13793-MCAR)'                'Experienced'
    '13792'      'Software Engineer - Desktop / Toolbox Integration (13792-MCAR)'           'Experienced'
    '13791'      'Autonomous Controls Systems Engineer (13791-MCAR)'                        'Experienced'
    '13789'      'Software Engineer - MATLAB Hardware (13789-MCAR)'                         'Experienced'
    '13779'      'Web Application Developer (13779-JJUS)'                                   'Experienced'
    '13778'      'Software Engineer - Cloud Storage (13778-JJUS)'                           'Experienced'
    '13777'      'Senior Software Engineer - Web Services (13777-JJUS)'                     'Experienced'
    '13776'      'Senior Software Engineer - MATLAB Online (13776-JJUS)'                    'Experienced'
    '13775'      'Senior Technical Lead - MATLAB Web Services (13775-JJUS)'                 'Experienced'
    '13774'      'Java Software Engineer - Support Software Installer (13774-JJUS)'         'Experienced'
    '13773'      'Java Software Engineer (13773-JJUS)'                                      'Experienced'
    '13772'      'Mobile Software Engineer (13772-JJUS)'                                    'Experienced'
    '13752'      'Software Engineer - Numerical Optimization (13752-SMAR)'                  'Experienced'
    '13751'      'Principal Software Engineer - Data Munging (13751-SMAR)'                  'Experienced'
    '13750'      'Senior Software Engineer - MATLAB Math (13750-SMAR)'                      'Experienced'
    '13747'      'Image Algorithms Software Engineer (13747-SMAR)'                          'Experienced'
    '13746'      'Software Engineer - Data Analysis (13746-SMAR)'                           'Experienced'
    '13745'      'Data Analysis Tools Developer (13745-SMAR)'                               'Experienced'
    '13743'      'Senior Software Engineer - Statistics and Machine Learning (13743-SMAR)'  'Experienced'
    '13742'      'Senior MATLAB Web UI Engineer (13742-SMAR)'                               'Experienced'
    '13734'      'Senior Software Engineer - Application Deployment (13734-MCAR)'           'Experienced'
    '13733'      'Software Engineer - Compiler Componentization (13733-MCAR)'               'Experienced'
    '13732'      'UI Developer - Toolboxes & Add-ons (13732-MCAR)'                          'Experienced'
    '13730'      'Software Engineer - C++ (13730-MCAR)'                                     'Experienced'
    '13729'      'Software Engineer - C++ (13729-MCAR)'                                     'Experienced'
    '13689'      'Simulink Developer Engineer (13689-KCAR)'                                 'Internships'
    '13668'      'Co-Op: Design Automation Software Development (13668-KCAR)'               'Internships'
    '13660'      'C++ Software Engineer - Algorithms and Code Generation (13660-GMAR)'      'Experienced'
    '13651'      'Compiler Engineer for FPGA Design (13651-KCAR)'                           'Internships'
    '13621'      'C++ Software Engineer - User Interfaces and Visualization (13621-BWAL)'   'Experienced'
    '13613'      'MATLAB Web Applications Engineer - Intern (13613-JJUS)'                   'Internships'
    '13603'      'C++ Software Engineer Intern (13603-KCAR)'                                'Internships'
    '13594'      'Senior MATLAB Test Frameworks Engineer (13594-BHIL)'                      'Experienced'
    '13593'      'Software Engineer for Design Automation (13593-KKOT)'                     'Internships'
    '13577'      'Data Tools Development Intern (13577-KCAR)'                               'Internships'
    '13569'      'Senior Compiler Engineer (13569-GMAR)'                                    'Experienced'
    '13566'      'Simulink Compiler Engineering Intern (13566-KCAR)'                        'Internships'
    '13563'      'LRM Updates to Verilog and SystemVerilog Emitters (13563-KCAR)'           'Internships'
    '13558'      'Software Engineer - GUI and Hardware Connectivity (13558-GMAR)'           'Experienced'
    '13545'      'C++ Software Engineer - Graphical User Interfaces (13545-BWAL)'           'Experienced'
    '13538'      'OAuth / LTI Integration Software Intern (13538-KCAR)'                     'Internships'
    '13532'      'Software Scope Infrastructure Internship (13532-KCAR)'                    'Temps/Consultants'
    '13531'      'Software Scope Infrastructure Internship (13531-KCAR)'                    'Temps/Consultants'
    '13530'      'FPGA Prototyping Intern (13530-KCAR)'                                     'Internships'
    '13529'      'HDL Verifier Intern (13529-KCAR)'                                         'Internships'
    '13518'      'Communications and Signal Processing HDL Development (13518-KCAR)'        'Internships'
    '13516'      'FPGA Prototyping for Image, Video and Computer Vision (13516-KCAR)'       'Internships'
    '13515'      'Audio Intern (13515-KCAR)'                                                'Internships'
    '13508'      'C++ Software Developer - Discrete Event Simulation (13508-GMAR)'          'Experienced'
    '13473'      'C++ Developer (13473-MCAR)'                                               'Experienced'
    '13466'      'JavaScript Software Engineer (13466-SMAR)'                                'Experienced'
    '13422'      'Release Engineering Intern (13422-KCAR)'                                  'Internships'
    '13412'      'Web Applications Development Intern (13412-KCAR)'                         'Internships'
    '13403'      'Documentation Toolsmith (13403-JJUS)'                                     'Experienced'
    '13393'      'Robotics Software Developer (13393-GMAR)'                                 'Experienced'
    '12383'      'C++ Software Engineer - Compiler Engineer for MATLAB (12383-MCAR)'        'Experienced'
    '12382'      'Senior Compiler Engineer LLVM (12382-GMAR)'                               'Experienced'
    '12379'      'Senior Software Engineer - Internet Services Platform (12379-JJUS)'       'Experienced'
    '12377'      'Senior Software Engineer - Graphics / OpenGL (12377-SMAR)'                'Experienced'
    '12369'      'Build System Software Engineer (12369-BHIL)'                              'Experienced'
    '12365'      'Test Infrastructure Software Developer (12365-BHIL)'                      'Experienced'
    '12362'      'Software Engineer - Data Analysis Tools (12362-SMAR)'                     'Experienced'
    '12358'      'Web UI Developer - MATLAB Editor (12358-MCAR)'                            'Experienced'
    '12356'      'Software Developer - Graphic Export and Printing (12356-SMAR)'            'Experienced'
    '12352'      'JavaScript Build Infrastructure Engineer (12352-BHIL)'                    'Experienced'
    '12351'      'Software Engineer - Maven Build and Deployment (12351-BHIL)'              'Experienced'
    '12342'      'Web Application
Developer (12342JJUS)' 'Experienced' '12324' 'Compiler Engineer  Parallel Computing (12324GMAR)' 'Experienced' '12322' 'Web UI Software Engineer (12322SMAR)' 'Experienced' '12320' 'Senior Web Application Developer (12320JJUS)' 'Experienced' '12317' 'Senior Software Engineer  Enterprise Solutions (12317MCAR)' 'Experienced' '12316' 'C++ Software Engineer  Simulink Architecture Group (12316GMAR)' 'Experienced' '12315' 'Computer Vision Systems Software Engineer (12315GMAR)' 'Experienced' '12312' 'Web Application Developer  Groovy on Grails/Java (12312BHIL)' 'Experienced' '12308' 'Analog and Mixed Signal Simulation Developer (12308GMAR)' 'Experienced' '12305' 'Compiler Engineer (12305GMAR)' 'Experienced' '12304' 'Software Engineer  Data Streaming and Visualization (12304BWAL)' 'Experienced' '12301' 'C++ Algorithm Optimization Software Engineer (12301GMAR)' 'Experienced' '12300' 'Senior Compiler Developer  Autogenerate C from Simulink (12300GMAR)' 'Experienced' '12284' 'C++ Algorithm Developer  Stateflow (12284GMAR)' 'Experienced' '12275' 'C++ Code Generation Software Engineer (12275GMAR)' 'Experienced' '12240' 'Java Software Engineer Distributed Application Development (12240BHIL)' 'Experienced' '12229' 'Senior Engineer, Requirements Modeling and Management (12229GMAR)' 'Experienced' '12227' 'Senior Engineer, Model Static Analysis (12227GMAR)' 'Experienced' '12217' 'Release Engineering Intern (12217BHIL)' 'Internships' '12174' 'Internal Tools Developer (12174BHIL)' 'Experienced' '12159' 'Release Engineer (12159BHIL)' 'Experienced' '12155' 'Web Software Test Infrastructure and Architecture Engineer (12155BHIL)' 'Experienced' '12072' 'Web GUI Engineer (12072SMAR)' 'Experienced' '12042' 'Web Application Developer  Commerce Systems (12042JJUS)' 'Experienced' '12029' 'Senior Software Engineer  Graphics / OpenGL (12029SMAR)' 'Experienced' '11941' 'Numerical Simulation Software Engineer (11941GMAR)' 'Experienced' '11883' 'Senior Software Engineer, Program Analysis/Formal 
Methods (11883GMAR)' 'Experienced' '11715' 'Product Release Engineer (11715BHIL)' 'New Graduate' '11669' 'Web UI Software Engineer (11669JJUS)' 'Experienced' '11506' 'Support Package for Mobile devices (11506SMAR)' 'Internships' '11471' 'DevOps Engineer  Cloud/SaaS (11471SMAR)' 'Experienced' '11431' 'Senior Compiler Engineer (11431MCAR)' 'Experienced' '11373' 'Web User Interface  Senior Software Developer (11373GMAR)' 'Experienced' '11302' 'Senior Technical Lead  Statistics / Machine Learning (11302SMAR)' 'Experienced' '11254' 'Senior Software Developer  Graphical Language Editors (11254BWAL)' 'Experienced' '11087' 'Web MATLAB Desktop UI Developer (11087MCAR)' 'Experienced' '10415' 'Senior C++ Compiler Engineer (10415MCAR)' 'Experienced' '10089' 'Senior Software Test Infrastructure Developer (10089BHIL)' 'Experienced' '10038' 'MATLAB Software Engineer  Web GUIs (10038SMAR)' 'Experienced' '9537' 'Senior Compiler Engineer (9537MCAR)' 'Experienced' '8916' 'MATLAB on the Web  Backend Developer (8916JJUS)' 'Experienced' '8904' 'Senior Software Engineer  Web UI Platform (8904JJUS)' 'Experienced' '8892' 'Simulink Compiler Engineer (8892GMAR)' 'Experienced'
Perhaps after searching through this list of jobs, you realize that you would prefer an application engineering job. You can run more searches by placing the commands illustrated above into a jobsReader function. The jobsReader function takes a filename as its first input, followed optionally by location and department strings; specify 'all' for either one to get a complete listing. First save the RSS feed to a file by using websave, and then use your jobsReader function on that file.
filename = 'rss.xml';
websave(filename, rssURL);
Specify your new job search criteria and obtain the data from the file.
myLocation = 'US-MA-Natick';
myDepartment = 'Application Engineering';
applicationEngineeringJobs = jobsReader(filename,myLocation,myDepartment)
applicationEngineeringJobs = 

    Reference    Title    Type
    _________    _____    ____

    '14075'    'Application Engineer Hardware-in-the-Loop (14075SMAR)'    'Experienced'
    '11969'    'Senior Application Engineer (11969SMAR)'    'Experienced'
    '11943'    'Principal Application Engineer - Hardware-in-the-Loop (11943SMAR)'    'Experienced'
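The jobsReader function itself is not listed in the post. As a rough illustration of the filtering idea it describes, here is a sketch in Python rather than MATLAB; the feed layout (job titles in <title>, location and department as <category> elements) and the hyphenated reference codes are assumptions for the example, not the actual MathWorks RSS schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical feed layout; the real MathWorks RSS schema may differ.
SAMPLE_RSS = """<rss><channel>
<item><title>Application Engineer (14075-SMAR)</title>
      <category>US-MA-Natick</category>
      <category>Application Engineering</category></item>
<item><title>Software Engineer (14028-MCOL)</title>
      <category>US-MA-Natick</category>
      <category>Engineering</category></item>
</channel></rss>"""

def jobs_reader(xml_text, location='all', department='all'):
    """Return job titles whose categories match location and department.
    Passing 'all' for either argument acts as a wildcard."""
    jobs = []
    for item in ET.fromstring(xml_text).iter('item'):
        cats = [c.text for c in item.findall('category')]
        if location != 'all' and location not in cats:
            continue
        if department != 'all' and department not in cats:
            continue
        jobs.append(item.findtext('title'))
    return jobs

print(jobs_reader(SAMPLE_RSS, 'US-MA-Natick', 'Application Engineering'))
```

The MATLAB jobsReader would do the same kind of match against the saved rss.xml file, returning a table instead of a list.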
Rather than saving the content to a file, you can specify a function handle as a content reader for webread. When the ContentReader property is specified in weboptions, webread downloads the content to a temporary file and reads the file using the specified function.
Specify an anonymous function handle to return application engineering jobs in Natick.
contentReader = @(filename)jobsReader(filename,myLocation,myDepartment);
options = weboptions('ContentReader',contentReader);
applicationEngineeringJobs = webread(rssURL,options)
applicationEngineeringJobs = 

    Reference    Title    Type
    _________    _____    ____

    '14075'    'Application Engineer Hardware-in-the-Loop (14075SMAR)'    'Experienced'
    '11969'    'Senior Application Engineer (11969SMAR)'    'Experienced'
    '11943'    'Principal Application Engineer - Hardware-in-the-Loop (11943SMAR)'    'Experienced'
Use the Facebook Graph API to obtain more information about MathWorks. The Graph API returns a JSON string. webread parses the string into a structure.
facebookAPI = 'https://graph.facebook.com/';
company = 'MathWorks';
facebookURL = [facebookAPI company];
mathworks = webread(facebookURL)
mathworks = 

                     id: '19564567448'
                  about: 'Over one million people around the world speak M...'
               can_post: 0
               category: 'Computers/technology'
          category_list: [1x1 struct]
               checkins: 2063
       company_overview: 'MathWorks products are used throughout the autom...'
                  cover: [1x1 struct]
            description: 'MATLAB® is a high-level language and interactive...'
                founded: '1984'
          has_added_app: 0
      is_community_page: 0
           is_published: 1
                  likes: 211377
                   link: 'https://www.facebook.com/MathWorks'
               location: [1x1 struct]
                mission: 'Technology Our goal is to change the world by a...'
                   name: 'MathWorks'
                parking: [1x1 struct]
                  phone: '(508) 647-7000'
    talking_about_count: 2268
               username: 'MathWorks'
                website: 'http://www.mathworks.com'
        were_here_count: 2063
Display overview information about MathWorks. Use line wrapping to display the text; otherwise it will get trimmed when publishing the blog post. Use the function strsplit to split the string at each white space into a cell array of strings. Compute indices indicating which elements to print. With the indices, use the function strjoin to construct the string to print. Use the function strtrim to ensure that each line begins with a nonwhitespace character.
splitData = strsplit(mathworks.company_overview,' ');
maxLengthOfLine = 80;
lengthOfString = cellfun(@length, splitData) + 1;
while ~isempty(splitData)
    elementsToPrint = cumsum(lengthOfString) <= maxLengthOfLine;
    if ~any(elementsToPrint)
        elementsToPrint(1) = true;
    end
    lineout = strjoin(splitData(elementsToPrint));
    fprintf('%s\n', strtrim(char(lineout)));
    splitData = splitData(~elementsToPrint);
    lengthOfString = lengthOfString(~elementsToPrint);
end
MathWorks products are used throughout the automotive, aerospace, communications, electronics, and industrial automation industries as fundamental tools for research and development. They are also used for modeling and simulation in increasingly technical fields, such as financial services and computational biology. MathWorks software enables the design and development of a wide range of advanced products, including automotive systems, aerospace flight control and avionics, telecommunications and other electronics equipment, industrial machinery, and medical devices. More than 5000 colleges and universities around the world use MathWorks solutions for teaching and research in a broad range of technical disciplines.
Display the MathWorks cover source image from Facebook. webread returns an M-by-N-by-3 image when reading image/jpeg content. Use imshow to display the image.
RGB = webread(mathworks.cover.source);
figure
imshow(RGB)
You can use webread to read specific fields from Facebook by specifying a 'fields' query parameter. Use the query parameter syntax to read the 'mission' and 'likes' fields.
fields = 'mission,likes';
data = webread(facebookURL,'fields',fields)
data = 

    mission: 'Technology Our goal is to change the world by accelerating ...'
      likes: 211377
         id: '19564567448'
To understand more about MathWorks, print the mission statement. The function strprint is a local function created from the code in the previous section to split and print the company overview. In this case, use strsplit to split the mission statement into sentences.
sentences = strsplit(data.mission, '.');
for k = 1:length(sentences)
    fprintf('\n')
    if ~isempty(sentences{k})
        sentence = sprintf('%s.\n', sentences{k});
        strprint(sentence)
    end
end
Technology Our goal is to change the world by accelerating the pace of discovery, innovation, development, and learning in engineering and science. We work to provide the ultimate computing environment for technical computation, visualization, design, simulation, and implementation. We use this environment to provide innovative solutions in a wide range of application areas. Business We strive to be the leading worldwide developer and supplier of technical computing software. Our business activities are characterized by quality, innovation, and timeliness; competitive awareness; ethical business practices; and outstanding service to our customers. Human We cultivate an enjoyable, vibrant, participatory, and rational work environment that nurtures individual growth, empowerment, and responsibility; appreciates diversity; encourages initiative and creativity; values teamwork; shares success; and rewards excellence. Social We actively support our communities and promote social and environmental responsibility. Learn more about our Social Mission.
Display the location information to assist in travel planning.
location = mathworks.location
location = 

         city: 'Natick'
      country: 'United States'
     latitude: 42.3
    longitude: -71.349
        state: 'MA'
       street: '3 Apple Hill Dr'
          zip: '01760'
You can use a RESTful web service from the Federal Aviation Administration (FAA) to find out the local weather and determine whether any flight delays exist for the Boston Logan International Airport. The web service allows you to specify JSON format for the output.
faaURL = 'http://services.faa.gov/airport/status/';
airportCode = 'BOS';
url = [faaURL airportCode];
format = 'application/json';
options = weboptions('Timeout',10);
data = webread(url,'format',format,options)
data = 

      delay: 'false'
       IATA: 'BOS'
      state: 'Massachusetts'
       name: 'General Edward Lawrence Logan International'
    weather: [1x1 struct]
       ICAO: 'KBOS'
       city: 'Boston'
     status: [1x1 struct]
Display the weather metadata, including the time of the last update.
data.weather.meta
ans = 

     credit: 'NOAA's National Weather Service'
    updated: '8:54 AM Local'
        url: 'http://weather.gov/'
Display the weather information, including temperature and wind speed.
data.weather
ans = 

    visibility: 2
       weather: 'Light Rain Fog/Mist'
          meta: [1x1 struct]
          temp: '43.0 F (6.1 C)'
          wind: 'Northwest at 10.4mph'
Display the airport status.
data.status
ans = 

          reason: 'No known delays for this airport.'
    closureBegin: ''
         endTime: ''
        minDelay: ''
        avgDelay: ''
        maxDelay: ''
      closureEnd: ''
           trend: ''
            type: ''
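All of these fields come from a single JSON document; webread performs the decoding for you. As a rough offline illustration of that step, here is the same parse in Python, using a canned response modeled on the output shown above (values abridged; the live service and its schema may have changed):

```python
import json

# Canned response modeled on the FAA output shown above (abridged).
response_text = """{
  "delay": "false", "IATA": "BOS", "state": "Massachusetts",
  "name": "General Edward Lawrence Logan International",
  "city": "Boston",
  "weather": {"temp": "43.0 F (6.1 C)", "wind": "Northwest at 10.4mph"},
  "status": {"reason": "No known delays for this airport."}
}"""

data = json.loads(response_text)  # webread performs this decode automatically
print(data['name'])
print('Delayed?', data['delay'])
print('Temperature:', data['weather']['temp'])
```

Nested JSON objects become nested dictionaries here, just as webread turns them into nested MATLAB structures.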
If you have access to the Mapping Toolbox™, you can display a map showing you a route from Boston Logan International Airport to MathWorks. Use the gpxread function to read a sample GPX file that ships with the Mapping Toolbox and contains a route from Boston Logan to MathWorks. Overlay that route onto an OpenStreetMap base layer using the webmap function. Display the route using the wmline function.
route = gpxread('sample_route');
webmap('openstreetmap')
wmline(route.Latitude,route.Longitude)
Print out turn-by-turn directions.
turns = route(~cellfun(@isempty, route.Description));
for k = 1:length(turns)
    fprintf('%s. %s\n',num2str(k),turns(k).Description)
end
1. Head southeast
2. Keep left at the fork, follow signs for I-90 W/I-93 S/Williams Tunnel/Mass Pike and merge onto I-90 W Partial toll road
3. Take exit 13 to merge onto MA-30 E/Cochituate Rd toward Natick Partial toll road
4. Turn right onto Speen St
5. Merge onto MA-9 E/Worcester St via the ramp on the left to Boston
6. Slight right onto Apple Hill Dr
7. Turn left toward Apple Hill Dr
8. Take the 1st right onto Apple Hill Dr Destination will be on the right
9. The MathWorks, Inc., Natick, MA.
Your final destination is the MathWorks campus. You can obtain a high-resolution image of the area from the United States Geological Survey (USGS) National Map Server using the Web Map Service (WMS) protocol. The function webread returns an RGB image when reading 'image/jpeg' content type. Specify the WMS query parameters to obtain the image of interest.
usgsURL = 'http://raster.nationalmap.gov/arcgis/services/Orthoimagery/USGS_EROS_Ortho/ImageServer/WMSServer';
service = 'WMS';
layers = '0';
crs = 'CRS:84';
contentType = 'image/jpeg';
height = 512;
width = 512;
request = 'GetMap';
latlim = [42.298530964,42.301131366];
lonlim = [-71.353251832,-71.348120766];
format = '%12.9f';
bbox = [ ...
    sprintf(format,lonlim(1)) ',' sprintf(format,latlim(1)), ',', ...
    sprintf(format,lonlim(2)) ',' sprintf(format,latlim(2))];
version = '1.3.0';
RGB = webread(usgsURL, ...
    'SERVICE', service, 'LAYERS', layers, 'CRS', crs, 'FORMAT', contentType, ...
    'HEIGHT', height, 'WIDTH', width, 'REQUEST', request, 'BBOX', bbox, ...
    'VERSION', version);
Display the image.
figure
imshow(RGB)
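The BBOX parameter is just a comma-separated string of coordinate limits; with WMS 1.3.0 and CRS:84 the axis order is longitude, latitude. A compact sketch of the same construction, in Python for illustration (the helper name is ours, not part of any WMS library):

```python
def wms_bbox(lonlim, latlim, fmt='{:.9f}'):
    """Build a WMS 1.3.0 BBOX string; CRS:84 uses lon,lat axis order,
    so the string is min-lon, min-lat, max-lon, max-lat."""
    return ','.join(fmt.format(v) for v in
                    (lonlim[0], latlim[0], lonlim[1], latlim[1]))

# The same limits as the MATLAB request above
bbox = wms_bbox((-71.353251832, -71.348120766), (42.298530964, 42.301131366))
print(bbox)
```

Getting this axis order wrong is a classic WMS pitfall: the server either returns an error or an image of the wrong patch of the Earth.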
I am hopeful that this post gives you an introduction to accessing RESTful web services using MATLAB. There are many RESTful web services available on the Internet. How might you use these tools in your work? Let us know here.
Best of luck in your job search!
Published with MATLAB® R2014b
Idin's pick for this week is Koch curve by Dimitrios Piretzidis.
This week being a holiday week in the US, and start of winter in the Northern Hemisphere where our corner of the world (Natick, Massachusetts) is located, I figured I would pick something a little more fun and winter related, like a snowflake!
The “Koch Snowflake” or “Koch Star” is a well-known fractal curve (actually one of the earliest fractals to be described). There are many places on the web where you can read about the Koch Snowflake.
The mathematical formula for the Koch snowflake isn't very complicated, and there are at least a half-dozen Koch-related submissions I found on File Exchange. I picked this particular one out of the bunch because it worked straight out of the box, and it has a simple user interface that makes generating a snowflake as easy as entering an integer and pressing a button! Be warned, though, for any iteration order above 6 or 7, it will take a while to generate your snowflake! A close runner-up for me was this submission:
http://www.mathworks.com/matlabcentral/fileexchange/27577fractalcurves
This gives you more fractal functions, and the code is perhaps a bit more polished, but it's not as easy as a graphical user interface with a push button!
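For the curious, the recursion behind the curve really is only a few lines. Here is a minimal sketch, in Python for illustration and independent of either File Exchange submission: each segment is replaced by four segments, with the middle third pushed out into a 60-degree bump.

```python
import math

def koch_iterate(points):
    """Replace each segment (p,q) by four segments forming a 60-degree bump."""
    out = [points[0]]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = (x1 - x0) / 3, (y1 - y0) / 3
        a = (x0 + dx, y0 + dy)              # one third of the way along
        b = (x0 + 2 * dx, y0 + 2 * dy)      # two thirds of the way along
        # Apex: rotate the middle-third vector by +60 degrees about point a
        c, s = math.cos(math.pi / 3), math.sin(math.pi / 3)
        apex = (a[0] + dx * c - dy * s, a[1] + dx * s + dy * c)
        out.extend([a, apex, b, (x1, y1)])
    return out

curve = [(0.0, 0.0), (1.0, 0.0)]
for _ in range(3):
    curve = koch_iterate(curve)
print(len(curve))  # 4**3 segments + 1 points = 65
```

Joining three such curves around an equilateral triangle gives the snowflake; the 4x growth per iteration is exactly why high iteration orders take a while.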
Happy winter, everyone!
As always, your thoughts and comments here are greatly appreciated.
Published with MATLAB® R2015a
help greetings
% greetings  Seasonal holiday fractal.
%   greetings(phi) generates a seasonal holiday fractal that depends
%   upon the parameter phi. The default value of phi is the golden ratio.
%   What happens for other values of phi? Try both simple fractions
%   and floating point approximations to irrational values.
which greetings
http://www.mathworks.com/moler/ncm/greetings.m
phi = (1+sqrt(5))/2
greetings(phi)
greetings((1+sqrt(17))/2)
greetings((1+sqrt(13))/5)
greetings(22/7)
greetings(.469)
... and best wishes for the new year.
Published with MATLAB® R2014b
Jiro's pick this week is "Distribute Figures" by Anders Simonsen.
I must be obsessed with figure management, MATLAB figure that is. I've previously Picked three File Exchange entries around this topic: cascade, togglefig, and Useful Figure Management Utilities. No wonder I was drawn to this entry by Anders.
Have you ever generated lots of figures and wished you could quickly view all of them on one screen?
Anders's distFig lets you do that. Let's say that you have four figure windows.
logo
figure
surf(peaks)
figure
plot(rand(10,3))
figure
scatter3(rand(50,1),rand(50,1),rand(50,1))
Just call distFig, and it distributes your figures on the screen. Pass in some parameters to control how the figures are distributed.
distFig('Rows',2,'Columns',2)
Anders provides detailed help with the function, as well as some sample code demonstrating the various uses. I also appreciate his responsiveness to user comments. You can see that he quickly updated his entry to fix some issues that came up in R2014b. Great stuff, Anders!
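Under the hood, a figure distributor mostly has to carve the screen into a grid of position rectangles and assign one to each window. A hedged sketch of that bookkeeping, in Python with a hypothetical helper (this is an illustration of the idea, not Anders's actual code):

```python
def tile_positions(n, rows, cols, screen=(1920, 1080)):
    """Return [left, bottom, width, height] rectangles for n windows
    laid out row-major on a rows-by-cols grid over the screen."""
    w, h = screen[0] / cols, screen[1] / rows
    rects = []
    for k in range(n):
        r, c = divmod(k, cols)
        # Bottom-left origin (as in MATLAB figure Position); first row at the top
        rects.append([c * w, screen[1] - (r + 1) * h, w, h])
    return rects

for rect in tile_positions(4, rows=2, cols=2):
    print(rect)
```

In MATLAB the equivalent step would be setting each figure's Position property to one of these rectangles.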
Comments
Give this a try and let us know what you think here or leave a comment for Anders.
Published with MATLAB® R2014b
As you can imagine, to debug and validate my work, I need to compare a lot of signals in my model with the original data I received.
As a first step, I am trying to validate the static behavior of the model. This means that I simulate the model for only one time step and validate that the outputs are as expected for different inputs.
If the output does not match, then I need to inspect intermediary signals to figure out where the difference originates.
Display blocks
The first, most obvious, way to see the value of a signal in Simulink is the Display block. In my case, using Display blocks quickly made my model look like:
As you can see, the model quickly becomes a mess. And since those displays are only temporary, I will need to remove them later.
Value Label Display
Another option that I often find more convenient to quickly view the value of a signal for debugging is the Value Label Display.
In the Display menu, I can decide if I want the label to be displayed when hovering over the block, or when clicking on it.
There are also options to control how often the labels are updated, and their accuracy. In my case, I chose the long format to see all the digits available.
Once this is enabled, you can simply click on blocks to enable the displays and simulate the model to see the values.
Once your debugging is completed, you can remove all the labels in one click from the Display menu.
Now it's your turn
What is your preferred method to view signal values during debugging? Let us know by leaving a comment here.
I'm pleased to have Dave Foti back for a discussion of subclassing and class hierarchies in MATLAB. Dave manages the group responsible for object-oriented programming features in MATLAB.
In computer science, a class represents a set of objects that share a common definition, usually including data and functions or behaviors. A subclass represents a subset of objects that share a more specific definition usually by adding or specializing data and/or functions defined in a superclass. In practice subclassing has been used to reuse functionality for a new and more specialized purpose. In recent years, this usage has become largely discouraged due to the tendency for introducing errors in programs that reuse superclasses (often called base classes) for applications not anticipated by the original class. A class author hides the inner workings of a class to make the class useful as an abstract representation. That same hiding of details can also hinder the ability of a programmer to anticipate how the class will function in a new role. Let's explore some cases where subclassing does make sense in MATLAB and how to avoid some common programming pitfalls.
One of the most obvious uses for subclassing in MATLAB is a situation requiring a model of a real physical system that has already been classified. In such a situation, it might make sense to build a software model that closely resembles the real physical classification system. For example, one could create objects based on the Standard Model of particle physics [1]. In this way, the classification system already exists and can simply be represented in software. There is still design work to be done in how best to represent a physical system in software for a specific purpose, but all the classes can be built at roughly the same time by the same person or team and thus avoid the pitfalls of reusing an existing class later on after its details have been forgotten. Here is what a superclass might look like for elementary particles in the Standard Model:
type ElemParticle
classdef ElemParticle
    % Intrinsic properties of the particle
    properties(SetAccess = immutable)
        Spin
        Charge
        Mass
    end
    % State variables for a particular particle
    properties
        Position = [0 0 0];
        Velocity = [0 0 0];
    end
    methods
        function p = ElemParticle(spin, charge, mass)
            p.Spin = spin;
            p.Charge = charge;
            p.Mass = mass;
        end
    end
end
We can then create subclasses for fermions and bosons:
type Fermion
type Boson
classdef Fermion < ElemParticle
    methods
        function p = Fermion(charge, mass)
            p@ElemParticle(.5, charge, mass);
        end
    end
end

classdef Boson < ElemParticle
    methods
        function p = Boson(mass)
            p@ElemParticle(0, 0, mass);
        end
    end
end
We can divide the Fermions into Leptons and Quarks:
type Lepton
type Quark
classdef Lepton < Fermion
    methods
        function p = Lepton(charge, mass)
            p@Fermion(charge, mass);
        end
    end
end

classdef Quark < Fermion
    properties
        Color
    end
    methods
        function p = Quark(charge, mass, color)
            p@Fermion(charge, mass);
            p.Color = color;
        end
    end
end
In the above example, the superclasses and subclasses were designed as a single collection of classes so there is no need for superclasses to anticipate the needs of unknown future subclasses. Any future evolution of the superclasses needs to consider the whole system, but it is a bounded list of classes.
Even if one is not modeling a physical system with an existing classification model that makes sense in software, there are still other examples where a system of classes can be designed at roughly the same time to fulfill some purpose such as managing a variety of data formats or analysis techniques. For example, we might have a need to visualize some data as points. We might know that we have two kinds of data sets: one that is two-dimensional and one that is three-dimensional. We design a class that handles 2D data and a subclass that adds support for a third dimension. Here is what the 2D class looks like:
type PointList
classdef PointList
    properties
        X
        Y
    end
    methods(Sealed)
        function show(pc)
            figure;
            data = getData(pc);
            opts = {'+', 'MarkerSize', 3};
            if numel(data) == 3
                plot3(data{:},opts{:});
            else
                plot(data{:},opts{:});
            end
        end
    end
    methods(Access = protected)
        function data = getData(pc)
            data = {pc.X, pc.Y};
        end
    end
end
Because we know that we want to support 3D data, we design our plotting routine to be able to handle 2D or 3D data. This is an advantage of knowing about all the subclasses in the beginning. Then when we write the 3D subclass:
type PointList3D
classdef PointList3D < PointList
    properties
        Z
    end
    methods(Access = protected)
        function data = getData(pc)
            data = {pc.X, pc.Y, pc.Z};
        end
    end
end
it is very easy to support 3D plots without rewriting the whole show method. Here we produce a 3D point cloud for the 3rd harmonic of the Lshaped membrane:
L = membrane(3);
[X, Y] = meshgrid(1:length(L), 1:length(L));
pts = PointList3D;
pts.Z = L(:);
pts.X = X(:);
pts.Y = Y(:);
show(pts)
Another good reason to use subclassing is when a superclass is designed specifically for reuse. A common design pattern [2] is called the mixin pattern [3]. In this pattern a class is often used to implement one specialized piece of functionality. This class represents all objects that have this common functionality even though such classes may do other things very differently. A mixin class should expect nothing of a subclass except that it uses the mixin functionality. A common example of a mixin class in MATLAB is the class called matlab.mixin.CustomDisplay which is used to customize the MATLAB default object display while retaining appropriate consistency with other objects. Here is an example of this mixin being used on our ElemParticle superclass. To separate this version from the previous example, I've put the new version of the classes in a package called particle.
type +particle/ElemParticle
classdef ElemParticle < matlab.mixin.CustomDisplay
    % Intrinsic properties of the particle
    properties(SetAccess = immutable)
        Spin
        Charge
        Mass
    end
    % State variables for a particular particle
    properties
        Position = [0 0 0];
        Velocity = [0 0 0];
    end
    methods
        function p = ElemParticle(spin, charge, mass)
            p.Spin = spin;
            p.Charge = charge;
            p.Mass = mass;
        end
    end
    methods(Access = protected, Sealed)
        function propgroups = getPropertyGroups(p)
            propgroups = matlab.mixin.util.PropertyGroup;
            propgroups(1).Title = 'Intrinsic';
            propgroups(1).PropertyList = getIntrinsicPropertyNames(p);
            propgroups(2).Title = 'State';
            propgroups(2).PropertyList = {'Position','Velocity'};
        end
    end
    methods(Access = protected)
        function pnames = getIntrinsicPropertyNames(~)
            pnames = {'Spin','Charge','Mass'};
        end
    end
end
Note that whenever functionality is designed for a superclass such as this, it is important to implement the functionality for the whole class of objects including all instances of subclasses. Thus, the display should not just show what is common to ElemParticle, but should provide for specialization in subclasses. In this case, we add one method that can be specialized by subclasses to provide lists of intrinsic properties. Since we know about all the subclasses, we don't need to worry about what kinds of property groups a future subclass might need. We can see why it is important for superclasses to be designed with the knowledge that subclasses will be created and why it is very helpful if the subclasses are already known during this design. In this case, we know the family of subclasses of ElemParticle and thus we know that they will sometimes need to add new properties, but not whole new groups of properties. We choose the right amount of generality in the superclass so that each subclass can specialize display with minimal effort. In fact Quark is the only particle that needs additional code as shown here:
type +particle/Quark
classdef Quark < particle.Fermion
    properties
        Color
    end
    methods
        function p = Quark(charge, mass, color)
            p@particle.Fermion(charge, mass);
            p.Color = color;
        end
    end
    methods(Access = protected)
        function pnames = getIntrinsicPropertyNames(p)
            pnames = [getIntrinsicPropertyNames@particle.Fermion(p), 'Color'];
        end
    end
end
We can now create and display an electron and a top quark:
e = particle.Lepton(-1, 5.11e-4)
e = 

  Lepton with properties:

   Intrinsic
        Spin: 0.5
      Charge: -1
        Mass: 0.000511

   State
    Position: [0 0 0]
    Velocity: [0 0 0]
t = particle.Quark(2/3, 173.07, 'red')
t = 

  Quark with properties:

   Intrinsic
        Spin: 0.5
      Charge: 0.66667
        Mass: 173.07
       Color: 'red'

   State
    Position: [0 0 0]
    Velocity: [0 0 0]
Other common patterns where classes are designed for unbounded subclassing include interfaces [4] and frameworks [5]. Interface classes define how two pieces of code can interact with each other without actually defining how those interactions are accomplished. Because MATLAB is a dynamically typed language, it is possible to write functions to an interface that isn't actually defined as a class. For example many functions in MATLAB can work with any object that implements the size, subsref, and subsasgn functions to perform reshaping operations on arrays. In some cases, it may be useful to define interfaces explicitly so that it is easier to determine what kinds of objects will work with a particular function and so that it is easier to see the capabilities of a class in the explicit interfaces it supports.
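The idea of an explicit interface is not specific to MATLAB. As a small illustration outside the language, here is a sketch in Python where an abstract base class plays the role of the interface; the class and method names are ours, invented for the example:

```python
from abc import ABC, abstractmethod

class Sizeable(ABC):
    """Explicit interface: anything that reports a size and yields elements."""
    @abstractmethod
    def size(self): ...
    @abstractmethod
    def element(self, k): ...

class Range(Sizeable):
    """One conforming class: the values 1..n."""
    def __init__(self, n):
        self.n = n
    def size(self):
        return self.n
    def element(self, k):
        return k + 1

def total(obj):
    # Written against the interface only; any conforming class works here
    return sum(obj.element(k) for k in range(obj.size()))

print(total(Range(4)))  # 1 + 2 + 3 + 4
```

As in MATLAB, the function would work on any duck-typed object with these methods; the explicit interface simply documents the contract and lets tools check conformance.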
When designing a class for extension in the future, it is a good idea to think about how subclasses might restrict evolution of the base class. Some questions to consider are:
When designing methods in the base class, it is important to consider how subclasses will reuse and as necessary adapt these methods while maintaining the principle that subclass instances can be substituted for base class instances [6]. Some questions to ask are:
Once a class has been presented as a base class, its fundamental definition should not change or else subclasses could become invalid or produce erroneous results. The new or changed behaviors and capabilities may not be appropriate for all subclasses. The subclass authors could not possibly anticipate changes to the superclass when the subclasses were created. This is one of the reasons for adhering to the open/closed design principle [7].
Given the importance of considering how a class will be extended by subclasses, MATLAB provides ways to signal that a class wasn't designed for subclassing. Classes with the Sealed attribute set can't be used as superclasses. If an entire class hierarchy was designed together but not intended for extension with additional subclasses, then it can be sealed using the Sealed attribute on the leaf classes and the AllowedSubclasses attribute on the superclasses. MathWorks will at times seal classes or class hierarchies of widely used classes to indicate that these classes have not been designed to support subclasses.
These are some thoughts on subclassing in MATLAB, but I'd like to hear your thoughts as well. Have you used any of the mixin MATLAB classes like matlab.mixin.CustomDisplay or matlab.mixin.Heterogeneous? Do you have suggestions for mixin classes that you would like to see in MATLAB? You can post ideas and comments here.
Published with MATLAB® R2014b
An incredible book, published in several editions from 1909 to 1933 by German mathematicians Eugene Jahnke and Fritz Emde, contains definitions and formulas for mathematical functions, hand-calculated tables of function values, and meticulous hand-drawn 2- and 3-dimensional graphs. An English edition was published by Dover in 1945.
I will include only two of their illustrations in this blog. Here is one of their graphs of the gamma function in the complex plane. The surface is the magnitude of the function, with its poles at the integers on the negative real axis. The contour lines are the modulus and phase. I can't imagine how they made this plot in 1933.
Here is my attempt to have MATLAB recreate the Jahnke and Emde plot. It is only partially successful. I do have little ellipses to cut off the spikes; that was tricky. But I haven't begun to label the contours, or shade the edge faces.
I love this plot. It is the complex-valued Bessel function of order zero, also known as the Hankel function. There is a pole, a zero, and a saddle point. Again, imagine doing this by hand. Or, having some grad student do it.
Here we can use color, and our new parula colormap, in place of 3-D. But we can't get the nice feel of the hand-drawn lettering.
[X,Y] = meshgrid(-4:1/128:2,-1.5:1/128:1.5);
H = besselh(0,X+i*Y);
s = 3.2;
contourf(X,Y,abs(H),0:0.2:s)
hold on
c2 = s*(angle(H)/(2*pi)+0.5);
r2 = s*((-180:10:180)/360+0.5);
contour(X,Y,c2,r2,'k')
hold off
axis equal
axis tight
grid on
Eugene Jahnke and Fritz Emde, Tables of Functions with Formulas and Curves, 4th ed., Dover, 1945
A block unlike any other
Simulink Functions are unlike any other Simulink block, even other user-defined functions like Function-Call Subsystems. For starters, the block does not use traditional ports. Instead, like MATLAB functions, you specify arguments.
Inside the Simulink Function, arguments are represented by unique-looking Argument port blocks.
Since they aren’t traditional ports, you won’t find these blocks in the Library Browser. To add or remove arguments, you type right on the block’s mask:
To use a Simulink Function in your model, you employ a Function Caller block.
Simulink associates each instance of this block with a Function by the function name; it requires that every function name in the model hierarchy be unique.
Functions in referenced models
Speaking of hierarchy, one very practical use of Simulink Functions is in a referenced model context. Functions defined in a referenced model are in the scope of its parent. But the referenced model has to follow some very special design rules, the same as it does when it contains Function-Call Subsystems.
The simplest way to adhere to these rules is to create a model that contains nothing but your collection of Simulink Function blocks, like so:
Now add a Model block to your parent model, point it to this collection, and you can call any of these functions from the parent, or from any other model further up the hierarchy.
Generating code
Code generated from Simulink Functions is as modular and straightforward as you would expect. Each function is defined in its own source file. The function prototype always takes the form of:
void foo(type_T in1, type_T in2, type_T *out1, type_T *out2)
Simulink is smart enough to recognize when you are trying to emulate pass-by-reference: if you name your output the same as your input (like I did in the timestwo example in the images above), the generated function prototype looks just like this:
void foo(type_T *x)
Now it’s your turn
This opens up a whole new way to model. What kind of design patterns can you create?
In the summer of 2013 we were closing in on a choice for the new MATLAB colormap. We were down to tweaking and fine-tuning.
But ... we needed a name!
For my many experiments, I had fallen back on an old graduate school habit of naming things after characters from Lord of the Rings. So I had filenames such as gandalf_20130623a.m and faramir_20130712b.m. (Faramir is my favorite LOTR minor character.) I certainly knew that wasn't going to work for the final name. How to choose one?
I looked over the names of the existing colormaps:
jet, hsv, hot, cool, spring, summer, autumn, winter, gray, bone, copper, pink, lines, colorcube, prism, flag
Well, we've got a few identifiable themes in there:
And there are some oddball names, such as jet, lines, prism, and flag. None of that seemed inspirational for naming a new colormap.
I decided to look for something descriptive. But descriptive of what?
To remind you, here's what the new colormap looks like:
showColormap(parula,'bar')
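showColormap is a small helper used for this blog rather than a shipping MATLAB function. If you want to reproduce the bar display without it, a rough stand-in (my own sketch, not the blog's helper) is:

```matlab
% Display a colormap as a horizontal color bar.
cmap = parula(256);                        % 256-by-3 matrix of RGB values
image(reshape(cmap, [1 size(cmap,1) 3]))   % 1-by-256 truecolor strip
axis off
```

The reshape turns the 256-by-3 colormap matrix into a 1-by-256-by-3 truecolor image, which image then draws as a single row of colored pixels.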
I picked the main colors (to my eye, these are blue, green, orange, and yellow) in the new colormap and started doing searches using these color names and different kinds of objects. Animals seemed obvious. I actually started with fish, but that got nowhere fast.
Then I tried birds, and up popped the tropical parula:
As Wikipedia describes it, the tropical parula "has mainly blue-grey upperparts, with a greenish back patch and two white wing bars. The underparts are yellow, becoming orange on the breast."
Perfect! I didn't really think the team would go for it, though. I sent around an email with the name parula and a picture of the bird to a small group of people working on the visual appearance changes for the new MATLAB graphics system. Somewhat to my surprise, everyone said they liked it. Later, a larger group of senior MATLAB designers reviewed it, and they also liked it. So the name stuck.
That left us with one big problem, however. How is parula pronounced?
I don't know. In trying to find a definitive answer, I have only managed to confuse myself. As a result, I don't even always pronounce it the same way myself. In looking at various references for pronunciation of bird names, I have seen all of these variations:
I know two amateur birders here at MathWorks who told me definitively how to pronounce it. Of course, they each gave me a different answer.
I guess that most American English speakers would choose the first variation, which has the accent on the second syllable. That has certainly been the case at MathWorks headquarters in Massachusetts.
So I'll tell you what I tell MathWorkers here: you can pronounce it however you like!
Complementary Blocks
In Simulink, there are a few blocks that always come in pairs, for example Goto and From, and Data Store Write and Data Store Read.
In R2014b, if you have one of these blocks, you can simply hover the mouse on top of the block, and a light blue indicator will appear. Click on it and drag to make the complementary block appear.
Now it's your turn
Were you already aware of this feature? Do you find it useful? Let us know by leaving a comment here.
Today I’d like to introduce guest blogger Sarah Wait Zaranek who works for the MATLAB Marketing team here at MathWorks. Sarah previously has written about a variety of topics. Most recently, she co-wrote a post with me about the new webcam capabilities in MATLAB. Today, Sarah will be discussing datastore, one of the new big data capabilities introduced in MATLAB R2014b.
datastore is used for reading data that is too large to fit in memory. For this example, we will be reading in data from the vehicle census of Massachusetts. It is a catalog of information about vehicles registered from 2008 to 2011. The dataset contains information about individual cars registered, including vehicle type, location where the vehicle is housed, rated MPG, and measured CO_2 emissions. You can learn more about the data and even download it yourself, here. I have renamed the files in the demo for clarity's sake, but this is where they came from originally.
As mentioned, a datastore is an object useful for reading collections of data that are too large to fit in memory.
datastore can work with a single file or a collection of files. In this case, we will be reading from a single file. Our file does not include variable names at the top; they are listed in a separate header file, as defined below.
% Define Data File and Header File
datafile = 'vehiclebig.csv';
headerfile = 'varnames.txt';

% Read in Variable Names
fileID = fopen(headerfile);
varnames = textscan(fileID,'%s');
varnames = varnames{:};
fclose(fileID);
We can now create our datastore by giving the name of the data file as the input to the datastore function. We also specify that our datastore should not use the first row of our file as variable names. We will set those variable names explicitly using the names found in the 'varnames.txt' file.
ds = datastore(datafile,'ReadVariableNames',false);
% Set Variable Names
ds.VariableNames = varnames
ds =

  TabularTextDatastore with properties:

                      Files: {
                             'H:\Documents\LOREN\MyJob\Art of MATLAB\SarahZ\datastore\vehiclebig.csv'
                             }
          ReadVariableNames: false
              VariableNames: {'record_id', 'vin_id', 'plate_id' ... and 42 more}

  Text Format Properties:
             NumHeaderLines: 0
                  Delimiter: ','
               RowDelimiter: '\r\n'
             TreatAsMissing: ''
               MissingValue: NaN

  Advanced Text Format Properties:
            TextscanFormats: {'%f', '%f', '%f' ... and 42 more}
         ExponentCharacters: 'eEdD'
               CommentStyle: ''
                 Whitespace: ' \b\t'
    MultipleDelimitersAsOne: false

  Properties that control the table returned by preview, read, readall:
      SelectedVariableNames: {'record_id', 'vin_id', 'plate_id' ... and 42 more}
            SelectedFormats: {'%f', '%f', '%f' ... and 42 more}
                RowsPerRead: 20000
Note that we haven't read in our data yet. We have just provided an easy way to access it through ds, our datastore.
A really nice thing about a datastore is that you can preview your data without having to load it all into memory. datastore reads the data into a table, a data type in MATLAB designed to work well with tabular data.
data = preview(ds);
whos data
data(:,1:7)   % Look at first 7 variables
  Name      Size     Bytes  Class    Attributes

  data      8x45     21426  table

ans =

    record_id    vin_id     plate_id      me_id        owner_type    start_odom    start_date
    _________    ______    __________    __________    __________    __________    ____________
     2           1         5.3466e+06    1.2801e+07    1               NaN         '20111107'
     4           1         5.3466e+06    1.1499e+07    1               NaN         '20090827'
     7           2         6.6148e+06    1.2801e+07    1               NaN         '20111119'
     9           2         6.6148e+06    1.1499e+07    1               NaN         '20080701'
    10           3         6.4173e+06    1.2801e+07    1               NaN         '20111206'
     1           1         5.3466e+06    1             1             30490         '20090901'
     3           1         5.3466e+06    2             1             55155         '20101002'
     5           2         6.6148e+06    3             1                 5         '20080702'
By default, datastore will read in every column of our dataset. datastore makes an educated guess for the appropriate format for each column (variable) of our data. We can, however, specify a subset of columns or different formats if we wish.
We can specify which variables (columns) to read by setting the SelectedVariableNames property of our datastore. In this case, we only want to bring in 5 of the 45 columns.
ds.SelectedVariableNames = {'model_year', 'veh_type', ...
    'curbwt','mpg_adj','hybrid'};
preview(ds)
ans =

    model_year    veh_type    curbwt    mpg_adj    hybrid
    __________    ________    ______    _______    ______
    2008          'Car'       3500      21.65      0
    2008          'Car'       3500      22.54      0
    2008          'SUV'       4500         16      0
    2008          'SUV'       4500         17      0
    2005          'Truck'     5000      13.29      0
    2008          'Car'       3500      22.09      0
    2008          'Car'       3500      21.65      0
    2008          'SUV'       4500      16.66      0
We can adjust the format of the data we wish to access by using the SelectedFormats property. For example, we can bring in the vehicle type as a categorical variable by using the %C specifier. You can learn more here about the benefits of using categorical arrays.
ds.SelectedFormats;
ds.SelectedFormats{2} = '%C'   % read in as a categorical
ds =

  TabularTextDatastore with properties:

                      Files: {
                             'H:\Documents\LOREN\MyJob\Art of MATLAB\SarahZ\datastore\vehiclebig.csv'
                             }
          ReadVariableNames: false
              VariableNames: {'record_id', 'vin_id', 'plate_id' ... and 42 more}

  Text Format Properties:
             NumHeaderLines: 0
                  Delimiter: ','
               RowDelimiter: '\r\n'
             TreatAsMissing: ''
               MissingValue: NaN

  Advanced Text Format Properties:
            TextscanFormats: {'%*q', '%*q', '%*q' ... and 42 more}
         ExponentCharacters: 'eEdD'
               CommentStyle: ''
                 Whitespace: ' \b\t'
    MultipleDelimitersAsOne: false

  Properties that control the table returned by preview, read, readall:
      SelectedVariableNames: {'model_year', 'veh_type', 'curbwt' ... and 2 more}
            SelectedFormats: {'%f', '%C', '%f' ... and 2 more}
                RowsPerRead: 20000
We can use the read function to read in a chunk of our data. By default, read reads in 20000 rows at a time. This value can be adjusted using the RowsPerRead property.
testdata = read(ds);
whos testdata
  Name          Size       Bytes  Class    Attributes

  testdata      20000x5   683152  table
After you read in a chunk, you can use the hasdata function to see if there is still additional data available to read from the datastore.
hasdata(ds)
ans = 1
By using hasdata and read in a while loop with your datastore, you can read in your entire dataset a piece at a time. We will put in a counter just to track how many read operations took place in our loop.
counter = 0;
while hasdata(ds)
    % Read in Chunk
    dataChunk = read(ds);
    counter = counter + 1;
end
counter
counter = 825
By using reset, we can reset our datastore and start reading at the beginning of the file.
reset(ds)
Now that we see how to get started using datastore, let's look at 3 different ways to use it to work with a dataset that does not entirely fit in the memory of your machine.
If you are interested in only processing certain columns of your text file and those columns can fit in memory, you can use datastore to bring in those particular columns from your text file. Then, you can work with that data directly in memory. In this example, we are only interested in the model year and vehicle type of the cars that were registered. We can use readall instead of read to import all the selected data instead of just a chunk of it at a time.
ds.SelectedVariableNames = {'model_year', 'veh_type'};
cardata = readall(ds);
whos cardata
  Name             Size          Bytes  Class    Attributes

  cardata      16145383x2    161456272  table
Now that you have the data read into MATLAB, you can work with it like you would normally work with your data in MATLAB. For this example, we will just use the new histogram function introduced in R2014b to look at the distribution of vehicle model years registered.
figure
histogram(cardata.model_year)
hold on
histogram(cardata.model_year(cardata.veh_type == 'Car'))
hold off
xlabel('Model Year')
legend({'all vehicles', 'only cars'},'Location','southwest')
Another way to subset your data is to filter the data down a chunk at a time. Using datastore, you can read in a chunk of data and keep only the data you need from that chunk. You then continue this process, chunk by chunk, until you reach the end of the file and are left with only the subset of the data you want to use.
In this case, we want to extract a subset of data for cars that were registered in 2011. The new variables we are loading in (e.g., q1_2011) contain either a one or a zero, where a one represents a valid car registration during that quarter. So we save only the rows that contain a valid registration sometime in 2011 and discard the rest.
reset(ds)
ds.SelectedVariableNames = {'model_year','veh_type',...
    'q1_2011','q2_2011','q3_2011','q4_2011'};
data2011 = table;
while hasdata(ds)
    % Read in Chunk
    dataChunk = read(ds);
    % Find if Valid During Any Quarter in 2011
    reg2011 = sum(dataChunk{:,3:end},2);
    % Extract Data to Keep (Cars Registered in 2011)
    idx = reg2011 >= 1 & dataChunk.veh_type == 'Car';
    % Save to Final Table
    data2011 = [data2011; dataChunk(idx,1:2)];
end
whos data2011

figure
histogram(data2011.model_year)
xlabel('Model Year')
legend({'cars registered in 2011'})
  Name            Size         Bytes  Class    Attributes

  data2011      3503434x2   35036782  table
But what if we can't hold the subset of the data we are interested in analyzing in memory? We could instead process the data a section at a time and then combine intermediate results to get a final result. In this case, let's look at the % of hybrid cars registered every quarter (in terms of total cars registered). So, we compute a running total of the number of cars registered per quarter as well as the number of hybrids registered per quarter. Then, we calculate the final % when we have read through the entire dataset.
% Reset Datastore
reset(ds)

% Select Data to Import
quarterNames = varnames(end-15:end);
ds.SelectedVariableNames = [{'veh_type', 'hybrid'} quarterNames'];

% Read in Vehicle Type as a Categorical Variable
ds.SelectedFormats{1} = '%C';

totalCars = zeros(length(quarterNames),1);
totalHybrids = zeros(length(quarterNames),1);

while hasdata(ds)
    % Read in Chunk
    dataChunk = read(ds);
    for ii = 1:length(quarterNames)   % Loop over quarters
        % Extract Data
        idx = dataChunk{:,quarterNames(ii)} == 1 & dataChunk.veh_type == 'Car';
        idxHy = idx & dataChunk.hybrid == 1;
        % Perform Calculation
        totalCarsChunk = sum(idx);
        totalHybridsChunk = sum(idxHy);
        % Save Result
        totalCars(ii) = totalCarsChunk + totalCars(ii);
        totalHybrids(ii) = totalHybridsChunk + totalHybrids(ii);
    end
end
percentHybrid = (totalHybrids./totalCars)*100;

figure
scatter(1:length(percentHybrid),percentHybrid,'filled')
xlabel('Inspection Year')
ylabel('% Hybrids')

% Label tick axes
ax = gca;
ax.TickLabelInterpreter = 'none';
ax.XTick = 1:length(quarterNames);
ax.XTickLabel = quarterNames;
ax.XTickLabelRotation = 45;
You can also use datastore as the first step to creating and running your own MapReduce algorithms in MATLAB. Learn more about running MapReduce algorithms with MATLAB here. Perhaps this topic will be a blog post in the future.
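As a small taste of that, here is a sketch (my own, not from the MATLAB documentation examples) of counting hybrid registrations with mapreduce, reusing the hybrid flag from this dataset; the mapper and reducer names are hypothetical, and in R2014b each function would live in its own file:

```matlab
% Count hybrid vehicles with mapreduce. The datastore ds is assumed to
% have 'hybrid' among its SelectedVariableNames.
outds = mapreduce(ds, @hybridMapper, @hybridReducer);
readall(outds)

function hybridMapper(data, ~, intermKVStore)
    % Emit the hybrid count for this chunk under a single key.
    add(intermKVStore, 'hybrids', sum(data.hybrid == 1));
end

function hybridReducer(key, valueIter, outKVStore)
    % Sum the per-chunk counts into one total.
    total = 0;
    while hasnext(valueIter)
        total = total + getnext(valueIter);
    end
    add(outKVStore, key, total);
end
```

The mapper runs once per chunk read from the datastore, and the reducer combines the intermediate values stored under each key, so nothing larger than one chunk is ever in memory at once.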
Do you think you can use datastore with your big data? Let us know here.
Our plots of the first eigenfunction of the L-shaped membrane have changed several times over the last fifty years.
Ph.D. thesis. Two-dimensional contour plot. Calcomp plotter with ballpoint pen drawing on paper mounted on a rotating drum.
Calcomp plotter. Math department individual study student project. Original 3d perspective and hidden line algorithms. Unfortunately, I don't remember the student's name.
The first MathWorks documentation and the first surf plot. Apple Macintosh daisy-wheel electric typewriter with perhaps 72 dots-per-inch resolution.
Laser printer with much better resolution.
Sun workstation with CRT display and color. Hot colormap, proportional to height.
A hand-made lighting model. I have to admit this looks pretty phony today.
Good lighting and shading. But at the time this was just a nice graphics demonstration. It was a few more years before it became the official company logo.
A prototype 3-D printer. Pure granular silicon, a few dozen grains per inch. (Thanks to Robin Nelson and David Eells for curating photos.)
Cleve's Corner, "The Growth of MATLAB and The MathWorks over Two Decades", MathWorks News&Notes, Jan. 2006.
by Chaitanya Chitale
Are you new to MATLAB and looking for a way to get started? If so, check out the new MATLAB Onramp course.
MATLAB Onramp is an online course that provides a brief introduction to the MATLAB language. The course gives you hands-on MATLAB experience via an integrated, web-based version of MATLAB, as shown below.
As you work through the course you can try out different solutions to the problems asked and get immediate feedback.
The MATLAB Onramp course covers the basics of importing data, manipulating arrays, creating visualizations, and much more.
The course takes approximately 2 hours to complete. However, you can take the course at your own pace. After starting it, you can leave and come back to it at any time.
The MATLAB Onramp course is complimentary with your purchase of MATLAB. To get started, head over to the MATLAB Academy site: https://matlabacademy.mathworks.com.
You can also access the course from MATLAB R2014b by selecting Help → MATLAB Academy from the Home tab.
Interested in other training? Explore the other training courses that are offered by MathWorks.
An email thread about a date-related design question had me thinking about terminology this week. (Note: There's nothing about image processing in today's post. Sorry!)
Specifically, I was thinking about the term time zone. In the U.S. you might see a date written this way:
November 21, 2004 1:02 PM (EST)
EST means Eastern Standard Time in the U.S. Or you might see a date written like this:
11/21/2004 8:30 (-04:00)
Many people refer to EST or -04:00 as the "time zone." However, because I work across the hall from the chief designer and implementer (Hi, P^2) of the new datetime suite of functions in MATLAB R2014b, I know that these are more properly referred to as time zone offsets.
A time zone is a much more complex concept than a simple offset from UTC. For example, a time zone encompasses both current and historical rules for using Daylight Saving Time.
The time zone offset for a given time depends not only on the time zone but also on both the year and the day of year.
The dependence on the day of year is pretty obvious. In the fall season where I live, the time zone offset depends on whether the day is before or after the first Sunday in November.
d1 = datetime('October 15, 2014','TimeZone','America/New_York')
d1 = 15-Oct-2014
tzoffset(d1)
ans = -4:00
d2 = datetime('November 15, 2014','TimeZone','America/New_York')
d2 = 15-Nov-2014
tzoffset(d2)
ans = -5:00
Most of the state of Arizona does not observe Daylight Saving Time.
d3 = datetime('October 15, 2014','TimeZone','America/Phoenix')
d3 = 15-Oct-2014
tzoffset(d3)
ans = -7:00
d4 = datetime('November 15, 2014','TimeZone','America/Phoenix')
d4 = 15-Nov-2014
tzoffset(d4)
ans = -7:00
The dependence of time zone offset on the year is somewhat less obvious. It turns out that Daylight Saving Time rules change occasionally. In the U.S. the rules were last changed in 2007. Let's compare the time zone offset for noon on Halloween for 2007 and 2006.
d5 = datetime('October 31, 2007 12:00 PM','TimeZone','America/New_York')
d5 = 31-Oct-2007 12:00:00
tzoffset(d5)
ans = -4:00
d6 = datetime('October 31, 2006 12:00 PM','TimeZone','America/New_York')
d6 = 31-Oct-2006 12:00:00
tzoffset(d6)
ans = -5:00
Now for just a little fun with MATLAB R2014b date calculations. Most of the U.S. changes from Daylight Saving time to Standard Time at 2:00 AM on the first Sunday of November. Given the year, how can we compute that?
Here's just one way. Start with 2:00 AM on November 1 of the specified year.
year = 2014;
d = datetime(year,11,1,2,0,0)
d = 01-Nov-2014 02:00:00
Now use dateshift to move to the first Sunday.
dateshift(d,'dayofweek','Sunday') % You can also use 1 instead of 'Sunday'.
ans = 02-Nov-2014 02:00:00
My horoscope for today said "Your next blog will include an example of an anonymous function for no good reason." So here it is. Let's make an anonymous function that returns the exact time for the transition from Daylight Saving Time to Standard time in the U.S.
f = @(year) dateshift(datetime(year,11,1,2,0,0),'dayofweek','Sunday')
f = @(year)dateshift(datetime(year,11,1,2,0,0),'dayofweek','Sunday')
f(2014)
ans = 02-Nov-2014 02:00:00
f(2015)
ans = 01-Nov-2015 02:00:00
f(2016)
ans = 06-Nov-2016 02:00:00
Of course, this function only works for years between 2007 and whenever the U.S. Congress changes its mind again!
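For symmetry, one could write a companion function for the spring transition. Under the current rules, Daylight Saving Time begins at 2:00 AM on the second Sunday of March, and the earliest date that Sunday can fall on is March 8. So a variant of the same trick (my own, with the same post-2007 caveat) is:

```matlab
% Spring-forward transition: 2:00 AM on the second Sunday of March.
% Starting the search at March 8 guarantees we land on the second Sunday.
g = @(year) dateshift(datetime(year,3,8,2,0,0),'dayofweek','Sunday');
g(2014)
```

dateshift returns its input unchanged when the date already falls on the requested day of week, so years in which March 8 is itself a Sunday come out right too.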
Next week, I promise, I'll get back to writing about image processing. Or maybe colormaps. Unless I write about something else.
Reuse: Or how I learned to love the Bat Man Lego set…
Like many an engineer of my generation I grew up with the basic Lego® starter set: the 1x1, 2x2, 2x3, 1x8 and the massive 2x8. We considered the wheel a high-tech bit of plastic. Years later when I first saw a custom set with its detailed instructions, I was outraged by what I assumed was the corporate-sponsored crushing of my nephew’s creative possibilities. I scoffed, I scorned, I worried about the decline of western civilization if we fed these kids ships fully built!
My nephew had a whole raft of “complete” kits. From these kits he learned how to build houses and starships, bridges and mountains. What’s more, he learned how to build in a way that was rugged and would survive the hard-play action of space ships crashing into erupting volcanos. One day I stopped and saw what he had built — a hybrid submarine from bits of the Bat Man street cruiser and the Millennium Falcon brought together with good old basic Legos. He christened it the Subbatfalc! It was awesome.
Figure 1 Not actual Subbatfalc
What's the link with Simulink?
So what does this have to do with MATLAB and Simulink? Well, everything. As engineers creating something new, we need to figure out where we start. Do we use the primitive blocks and create everything from first principles, or do we find existing components and build from there?
MATLAB and Simulink give you a wonderful mixture of basic and advanced blocks. Further, with object-oriented programming in MATLAB and Simulink Libraries and Model blocks in Simulink, we give you the ability to create your own advanced blocks (heck, if you use TLC we give you the basic plastic to craft anything).
However, as a consultant in the Controls Design Area I have lost track of the number of times I have worked with a client who has built up a set of custom functions such as transfer functions, integrators, or table lookup algorithms. They’ve recreated basic blocks. Now, truth be told, in many cases these custom blocks had a slight performance edge over the built-in blocks. But customers also have to maintain and validate these blocks, taking time away from building something more important.
So how do you determine when to build from scratch and when to use built-in functionality? I would suggest the following six-state chart.
Figure 2: Note remember to thank Sally!
My nephew took less than an hour to build the Subbatfalc. Having worked with the kits, he had the skills to build it from basic blocks, but it would have taken him ten times longer. He made the rational choice to reuse; and so am I. So, here I am recanting my harangue against kits. It isn’t the end of civilization; it’s the start of a more interesting time. Now, pardon me, I am going to go help him build the Mount Doom/Death Star mashup (Doom Star??) so we can see what happens when the Subbatfalc attacks!
Now it's your turn
How do you decide between using an existing complex block, versus building it from basic blocks? Let us know by leaving a comment here.
The Method of Particular Solutions computes a highly accurate approximation to the eigenvalue we have been seeking, and guaranteed bounds on the accuracy. It also provides flexibility involving the boundary conditions that leads to the MathWorks logo.
In 1965-66, after finishing grad school, I spent a postdoctoral year at the ETH, the Swiss Federal Institute of Technology in Zurich. At the time, the faculty included three world-renowned numerical analysts, Eduard Stiefel, Heinz Rutishauser, and Peter Henrici. Moreover, IBM had a branch of their research division in Rüschlikon, a Zurich suburb, where they held occasional seminars.
In October, Leslie Fox, from Oxford, visited the IBM center and gave a talk about the work two of his students, Joan Walsh and John Reid, were doing on the L-shaped membrane. Expansions involving Bessel functions that matched the singularity of the eigenfunction near the reentrant corner were coupled to finite difference grids in the rest of the domain. Shortly after Fox's talk, it occurred to me to use the Bessel function expansions over the entire domain and avoid finite differences altogether.
Within a matter of weeks I had done computations on the CDC 1604 mainframe at the ETH that had me very excited. To celebrate, I made an L-shaped ornament out of aluminum foil for the Christmas tree in our apartment.
Henrici contributed some theoretical background relating what I was doing to work of Stephan Bergman and a Russian mathematician I. N. Vekua. Together with Fox, we published a paper in the SIAM Journal on Numerical Analysis in 1967.
You don't need to know much about Bessel functions to appreciate the method of particular solutions. Using polar coordinates, a particular solution is any function of the form
$$ v_\alpha(\lambda,r,\theta) = B_\alpha(\sqrt{\lambda} r) \sin{(\alpha \theta}) $$
First of all, notice that the $\sin{(\alpha \theta)}$ portion insures that this solution satisfies the boundary condition
$$ v_\alpha(\lambda,r,\theta) = 0 $$
along the two rays with $\theta = 0$ and $\theta = {\pi}/{\alpha}$. For our L-shaped domain the reentrant corner has an angle of $\frac{3}{2}\pi$, so we will want to have
$$ \alpha = \frac{2}{3}j $$
where $j$ is an integer.
The Bessel function $B_\alpha$ is defined by an ordinary differential equation that implies any $v_\alpha$ satisfies the partial differential equation
$$ \Delta v_\alpha + \lambda v_\alpha = 0 $$
If $\alpha$ is not an integer, then $B_\alpha(r)$ has another important property. Its derivatives blow up as $r$ goes to 0. This asymptotic singularity is just what we need to match the singularity in the eigenfunction.
We can also take advantage of symmetries. The first eigenfunction is symmetric about the centerline of the L. We can also eliminate components that come from the eigenfunctions of the unit squares that are subdomains. This implies that for the first eigenfunction $\alpha = \frac{2}{3}j$ where $j$ is an odd integer that is not a multiple of 3, that is
$$ \alpha_j = \frac{2}{3}j, \ \ j = 1,5,7,11,13, ... $$
Form a linear combination of particular solutions.
$$ u(\lambda,r,\theta) = \sum_\alpha c_\alpha v_\alpha(\lambda,r,\theta) $$
This function satisfies our differential equation throughout the domain, satisfies the boundary condition on the two edges that meet at the reentrant corner, and has the desired singular behavior there. All we have to do is pick $\lambda$ and the coefficients $c_\alpha$ so that $u$ satisfies the boundary condition, $u = 0$, on the remaining edges of the L.
The general idea is to pick a number of points on the outer boundary and investigate the matrix obtained by evaluating the particular solutions in the linear combination at these points. Vary $\lambda$ until that matrix is nearly rank deficient. The resulting null vector provides the desired coefficients.
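A bare-bones sketch of that idea (my own illustration, not membranetx; the boundary points r and theta are placeholders that would need to lie on the actual outer edges of the L):

```matlab
% Smallest singular value of the particular-solution matrix A(lambda).
% alpha: fractional Bessel orders, e.g. (2/3)*[1 5 7 11 13 17 19 23 25];
% (r,theta): polar coordinates of points chosen on the outer boundary.
function s = smallestSV(lambda, r, theta, alpha)
    A = zeros(numel(r), numel(alpha));
    for j = 1:numel(alpha)
        A(:,j) = besselj(alpha(j), sqrt(lambda)*r) .* sin(alpha(j)*theta);
    end
    s = min(svd(A));   % near zero when A(lambda) is nearly rank deficient
end
```

Given suitable boundary points, a one-dimensional minimizer such as fminbnd applied to @(lam) smallestSV(lam,r,theta,alpha) over an interval like [9, 10] would then locate the eigenvalue near 9.6397, and the right singular vector corresponding to the smallest singular value supplies the coefficients.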
The details of how we carry out this general plan have varied over the years. The best approach is implemented in the function membranetx, available here, described in the PDE chapter of Numerical Computing with MATLAB. It uses the matrix computation workhorse, the SVD, the singular value decomposition.
The number of rows in the matrix involved is the number of points on the boundary and the number of columns is the number of terms in the sum. The figures below come from matrices that are 46by9. Let $(r_i,\theta_i)$ be the points chosen on the boundary. The matrix elements are
$$ a_{i,j}(\lambda) = B_{\alpha_j}(\sqrt{\lambda} r_i) \sin{(\alpha_j \theta_i)} $$
The function fmintx is used to vary $\lambda$ and find a minimum of the smallest singular value of this matrix. The resulting matrix is essentially rank deficient, and the corresponding right singular vector is a null vector that provides the coefficients.
The minimum is found at
$$ \lambda = 9.639723844 $$
Here are the first four particular solutions. The vertical scale reflects the magnitude of the corresponding coefficient.
mps_figs
None of these four terms is remotely close to zero on the outer boundary, but when they are added together the sum is zero to graphical accuracy.
mps_figs(4)
So this is actually the first eigenfunction of the L-shaped membrane.
Now we can have some fun. Here is a surf plot of the sum of just the first two particular solutions.
mps_figs(2)
This function is an exact solution of the partial differential equation
$$ \Delta u + \lambda u = 0 $$
with the value of $\lambda$ reported above. It satisfies the boundary condition $u = 0$ on the two edges meeting at the reentrant corner, but it fails significantly on the outer edges. So it is only with some artistic license that we can refer to it as the first eigenfunction of the L-shaped membrane.
When I first saw this surf plot, it didn't have this gorgeous color map, but I found the shape so attractive that I kept it around. The evolution of the shading, lighting and colors that turned this into our logo is the subject of my next blog.
The Fox, Henrici and Moler paper was not just about the method of particular solutions. We were also concerned about finding guaranteed upper and lower bounds for the exact eigenvalue of the continuous differential operator. A theorem proved in the paper relates the discrepancy in meeting the boundary condition to the width of a small interval containing the eigenvalue. In 1966 we proved
$$ 9.6397238_{05} < \lambda < 9.6397238^{84} $$
Today, with membranetx in MATLAB, I find a value of $\lambda$ in the center of this interval.
[L,lambda] = membranetx(1);
format long
lambda
lambda = 9.639723844021997
In 2005 Timo Betcke and Nick Trefethen published a paper in SIAM Review that pointed out a serious shortcoming in the approach we described in the Fox, Henrici and Moler paper. When the method is attempted on other domains, especially domains with more than one singular corner, the matrix has a tendency to appear nearly rank deficient for all values of $\lambda$. It is difficult to identify local minima. The recommended cure is to add a number of interior points and require the approximate eigenfunction to be nonzero there.
Betcke and Trefethen also prove an approximation theoretic result that, when combined with their computations, adds four more digits to our rigorous bounds on the first eigenvalue.
$$ 9.63972384402_{16} < \lambda < 9.63972384402^{22} $$
Moreover, they are so confident in their numerical results that they are sure the exact eigenvalue lies in the center of this interval, and they are pretty sure they even know the next digit.
$$ \lambda \approx 9.63972384402195 $$
This agrees with what I get out of membranetx to almost full double precision accuracy. I think we finally have this fellow pinned down.
I recommend that anyone reading this blog who wants to know more about the method of particular solutions and computing the eigenvalues of the L-shaped membrane read the Betcke and Trefethen paper.
Leslie Fox, Peter Henrici, and Cleve Moler, Approximations and Bounds for Eigenvalues of Elliptic Operators, SIAM Journal on Numerical Analysis 4, 89-102, 1967, <http://blogs.mathworks.com/images/cleve/Fox_Henrici_Moler.pdf>
Timo Betcke and Lloyd N. Trefethen, Reviving the Method of Particular Solutions, SIAM Review 47, 469-491, 2005, <https://people.maths.ox.ac.uk/trefethen/publication/PDF/2005_112.pdf>
Cleve Moler, Numerical Computing with MATLAB, Chapter 11, Partial Differential Equations, 2004. <http://www.mathworks.com/moler/pdes.pdf>
Get the MATLAB code
Published with MATLAB® R2014b
I hesitated for some time today about where to go next in my discussion about the new default colormap in MATLAB. I finally decided to spend a little time on the reactions so far. Those reactions have ranged from "Great, what took you so long?" to "I hate it; give me back jet!" In the Twitterverse, one scientist proclaimed "End of an era!" Another simply said "yuck." (Jet is still in MATLAB, by the way.)
Today I want to dig into some of those reactions a little bit. Along the way I'll show some experiments with a test pattern to explain some of my comments about jet.
To remind you, last time I summarized 15 years or so of criticisms about rainbow colormaps:
I have seen a claim that rainbow incorporates a universally understood order based on associating blue with cold and red with hot. There's no doubt that this order makes sense for some people (although isn't a blue flame hotter than a red flame?). I quibble with the "universally understood" part, though. The dominant evidence is that individuals do not universally order the spectral hues the same way unless they refer to a spectral-order mnemonic, such as "Roy G. Biv." It is well-established, however, that individuals can easily and consistently order colors by relative brightness.
Another commenter said that the rainbow colormap has higher contrast than parula. Yes, this is true. Rainbow goes all the way from dark to bright in the bottom third of the data range, and it goes all the way from bright to dark in the top third of the data range. For some parts of the data range, therefore, there is higher visual contrast than with parula, which uses the entire data range to go from dark to bright. A price paid for this increased contrast, however, is the visual illusions and confusion caused by the reversed roles of dark and bright in different parts of the colormap.
Matteo Niccoli gives the following nice example of a rainbow-based visual illusion. The arrow in the right-hand image below points to something that gives the strong but false impression of being a depression in the bone structure.
Image credit: Matteo Niccoli, "The Rainbow Is Dead… Long Live the Rainbow!" Copyright 2014 MYCARTA. Used with permission.
And I showed another visual confusion example last time with these visualizations of two different sinusoidal patterns:
These visualizations give the impression that the two sinusoidal patterns have opposite phase, but in reality they have exactly the same phase.
Another commenter said that, with sufficient familiarity, the bright "stripes" that are characteristic of the rainbow colormap can be an advantage for certain kinds of data. That might be true in some particular cases, but I don't think it's true in general. Let me show an example, using a test pattern, that illustrates how much difficulty the cyan and yellow stripes can cause in interpreting what should be easily recognizable data.
url = 'http://blogs.mathworks.com/images/steve/2014/eia2000cropped.png';
I = imread(url);
imshow(I)
colormap(jet)
Let me label some portions of the visualization and then ask some questions about it.
text(35,90,'A','FontSize',25,'HorizontalAlignment','center',...
    'BackgroundColor','white')
text(125,90,'A','FontSize',25,'HorizontalAlignment','center',...
    'BackgroundColor','white')
text(215,90,'A','FontSize',25,'HorizontalAlignment','center',...
    'BackgroundColor','white')
text(190,320,'B','FontSize',25,'HorizontalAlignment','center')
line([77 160],[225 225],'Color','white','LineWidth',5)
text(65,225,'C','FontSize',25,'HorizontalAlignment','right',...
    'BackgroundColor','white')
At periodic intervals, the dark blue vertical bars have a bright outline (see the points marked "A"). What is the data significance of these bright outlines? Especially when compared with the dark blue vertical bars that do not have a bright outline?
The vertical bars in between the "A" marks don't have bright outlines. Those bars alternate between dark blue and dark red, a low-contrast combination that is a little difficult to focus on clearly (a red-blue visual illusion called chromostereopsis).
Above the "B", I see very thin vertical dark bars. When the bars are this thin, it's difficult to distinguish between the dark red and dark blue, which is unfortunate because those colors represent opposite ends of the data range. The bright outlines between the bars is broken up by a diagonal pattern of blue stripes.
The numbers around the border are oddly broken up and hard to decipher.
Let's investigate why some vertical bars are outlined and some not by looking at the data cross-section along the white path marked "C".
c = improfile(I,[77 160],[225 225]);
figure
plot(c,'*')
I'll superimpose two lines that show the data values corresponding to the bright cyan and bright yellow in the jet colormap.
hold on
plot([1 84],[1 1]*0.375*255,'c','LineWidth',5)
plot([1 84],[1 1]*0.625*255,'y','LineWidth',5)
hold off
It is now apparent why some vertical bars are outlined but not others. In each transition from low to high and from high to low, there is one data value that is in between. When that one transitional data value happens to be close to the data values mapping to cyan or yellow, the vertical bar is shown with a bright outline. Otherwise, there is no outline. Although the effect is visually dramatic and makes some regions of the data look quite different from others, the visual difference has no real data significance.
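The 0.375 and 0.625 fractions used above can be checked directly against the colormap itself. This is just a quick sanity check, not part of the original example:

```matlab
% Find where pure cyan and pure yellow sit in the jet colormap.
map = jet(256);
[~,iCyan]   = min(sum(abs(bsxfun(@minus, map, [0 1 1])),2));
[~,iYellow] = min(sum(abs(bsxfun(@minus, map, [1 1 0])),2));
[(iCyan-1)/255 (iYellow-1)/255]   % approximately 0.375 and 0.625
```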
OK, now let's show this data using the parula colormap.
imshow(I)
colormap(parula)
This visualization is trivially easy to interpret. The vertical bars in the middle are all the same. The digits are easy to read. The visual difference between low data values and high data values has high contrast and is easy on the eyes. And the data above the point marked "B" is seen to have a completely different form than the way it appeared with jet. The stripes have a lower spatial frequency and are seen to be slightly fanning out in direction instead of appearing vertical.
A valid complaint about this example is that it is completely artificial, with data rather unlike what people are likely to be looking at with jet or parula. I'd like to suggest thinking about it this way, though: if jet can take familiar, easily interpretable data and render it almost unrecognizable, what can jet do to your own data?
Finally, I'll mention that a blog reader commented that using jet made it easy to distinguish positive and negative values from each other in a data set whose values are centered around zero. And yes, jet can be better than parula for that. Jet is bright and relatively unsaturated in the middle, and it is dark and saturated at both ends. These are characteristics often shared by colormaps designed specifically for highlighting a central, specific value and distinguishing the values above and below that. These are called divergent colormaps. In a future post I'll show where to find some good divergent colormaps and how to use them in MATLAB.
Enabling Interface Display
In R2014b, you will notice a new entry at the top of the Display menu:
For a model that originally looks like this:
Enabling the interface display makes it look like this:
As you can see, this feature puts the emphasis on the inputs and outputs of the system.
Advantages of the Interface Display
For someone like me who receives new large Simulink models every day, this feature is very useful. When I explore new models, I can use the highlighting to trace sources and destinations of signals through multiple levels of nested subsystems. As you can see in the following animation, by zooming in, I can select one element of a bus signal and follow the highlighted signals to quickly find out where it is used.
Once the model has been simulated or updated once, the interface view displays for each port its data type and sample time, making it easier to understand the model.
One more thing to note: when in Interface Display mode, you cannot edit the model. This is convenient when your goal is just to explore and understand the model without modifying it.
Now it's your turn
Give the new interface view a try and let us know what you think by leaving a comment here.
Today, David Garrison, our guest blogger, will continue his series on the new graphics system in R2014b.
Here is Part 3 of the series.
In Part 1 of this series, I provided an introduction to the new MATLAB graphics system in R2014b with descriptions of a number of new features. In Part 2, I talked about one of the big changes in R2014b: graphics functions return MATLAB objects, not numeric handles. In this post, I will talk about compatibility considerations in using R2014b graphics.
This post is longer than my two previous posts. In the first section, I'll talk about the kind of changes we've made in R2014b graphics, why we made those changes, how they affect you, and where to go for more information. The second section describes visual changes in the new graphics system including a new colormap, new line colors in plots, and new sizes for plot titles and axis labels, and these affect everyone. The last section is about changes we've made that affect advanced graphics users and user interface builders. Most of you can skip the more advanced material in the last section.
When we talk about compatibility considerations we are talking about changes in R2014b that may not be compatible with code written before R2014b. These incompatibilities can affect your code in one of three ways.
First of all, let me say that here at MathWorks we take compatibility very seriously. We have a rigorous process for vetting potential changes in MATLAB that may affect existing MATLAB code. We evaluate them very thoroughly to determine the extent of their impact. We document those changes in our release notes and provide alternatives whenever possible. Those changes are necessary to allow us to continue to develop MATLAB.
One of our goals for the new graphics system was to introduce a new architecture that would allow us to build new graphics capabilities for the future. We couldn't continue with the old graphics architecture because it was too limiting. The new architecture has also allowed us to fix a large number of outstanding graphics bugs that have accumulated over the years.
Our second goal was to support existing graphics functionality and to minimize the disruption to MATLAB users. We did that by evaluating every change that would introduce an incompatibility in the graphics system. We tested those changes in house with over 100,000 MATLAB files and in person with a large number of advanced graphics users to determine the following:
The results of all this testing allowed us to do two things. First, it allowed us to make changes in the new graphics system before the release to minimize the effect of certain incompatibilities. Second, it told us what kind of documentation and tools would be required to help people make the transition.
Anyone using graphics will notice a visual difference in R2014b. People who've seen R2014b tell us the new visual appearance is a big improvement over previous versions of MATLAB. Differences include changes to colormaps, line colors, font sizes, grids, and other visual properties. For most people that's it. That's all you need to know.
Those of you who do advanced graphics programming and build user interfaces may be affected by incompatibilities that will require changes to your code. I'll talk about the most common ones in the last section of this post. We've also provided plenty of documentation and some tools to help you make the transition.
In preparation for the release of the new graphics system, we created a lot of material to help people understand the changes and make any modifications to their code. Here is a list of resources for information on R2014b graphics:
Anyone who uses MATLAB graphics in R2014b will see that plots look quite different. In this section, I will show you some of the big visual differences. As we go through these examples, I think you'll agree that the changes in R2014b are a big improvement over previous MATLAB versions. Of course, there may be times when you may want to change some of these visual properties. I will either tell you how to do that or I will provide a link to the documentation for more information.
In MATLAB, the colormap is a matrix of RGB color values used to set the colors for images and surfaces. For example, when you call the surf function you get a plot where the surface color is proportional to the height of the plot. Colors are based on the values from the colormap.
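You can see this for yourself with a tiny example (my own illustration, not from the original post):

```matlab
% A colormap is an m-by-3 matrix of RGB values, each in [0,1].
map = parula(5)   % five rows, from dark blue through yellow
surf(peaks)
colormap(map)     % surface colors are drawn from these five rows
```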
In R2014b, we introduced a new colormap called parula which is now the default. In previous versions of MATLAB, the default colormap was jet. Here's a comparison of the colormaps between R2014a and R2014b.
We changed the default colormap to address problems with rainbow colormaps like jet. The new parula colormap is ordered from dark to light and is perceptually uniform. For more information about the issues with rainbow colormaps like jet, see the post on Steve Eddins' blog called A New Colormap for MATLAB – Part 2 – Troubles with Rainbows. You can use the colormap function to change the colormap to whatever you want it to be. For example, to change the colormap to jet use the following command after you create your plot.
colormap(jet)
We've also changed the line colors used for plots. These line colors were selected to make lines easier to distinguish and to help people with certain types of color blindness.
The line colors used in plots are controlled by the ColorOrder property of the Axes object. One other note regarding line colors in plots. In R2014b, when plot is called with hold on, MATLAB will use the next color in the color order for each call to plot. That gives plots with different line colors. In previous versions, each call to plot would start over in the color order so that each line would be the same color.
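A minimal illustration of the new behavior follows. The ColorOrderIndex reset shown in the final comment is an R2014b axes property; treat that line as a suggestion rather than part of the original example.

```matlab
x = linspace(0,2*pi,100);
figure
hold on
plot(x,sin(x))   % first color in the ColorOrder
plot(x,cos(x))   % second color; before R2014b this reused the first color
hold off
% To get the pre-R2014b behavior, reset the cycle between calls:
% set(gca,'ColorOrderIndex',1)
```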
For information on controlling the colors in the ColorOrder see the documentation section Why Are Plot Lines Different Colors?
One last change regarding appearance of plots. In R2014b, plot titles and axis labels use a larger font size than in previous versions of MATLAB and plot titles are bold by default.
I intentionally used a very long title in this plot to make a point. In this case, the title is so long it extends beyond the boundaries of the axes. You can control your plot title and axis labels using the TitleFontWeight, TitleFontSizeMultiplier, and LabelFontSizeMultiplier properties. For more information see How Do I Make the Graph Title Smaller?
That covers the major visual changes for plots in R2014b. The next section covers changes that only affect advanced graphics users and GUI builders.
Most of you can stop reading here and get on with the rest of your day.
If you are still reading, I assume you do some advanced graphics programming in MATLAB or maybe you build MATLAB user interfaces. In this section I'm going to cover the four most common nonvisual changes in R2014b graphics that you are likely to encounter. If you do encounter one of these, you will probably have to make changes to your code.
I mentioned this briefly in Part 2 of this series. Graphics functions now return objects, not numeric handles. The R2014b documentation has detailed information about this subject in the section called Graphics Handles Are Now Objects, Not Doubles. I will give a couple of simple examples to illustrate what can happen with code written before R2014b.
Prior to R2014b, you could store a set of handles to graphics objects in an array and then add some numeric data to that array. In R2014b, that will cause an error.
x = -pi:0.1:pi;
y1 = sin(x);
y2 = cos(x);
myLines = plot(x,y1,x,y2) % plot returns an array of two Line objects
If you then try to set myLines(3) = 1.2, you get the following error.
Cannot convert double value 1.2 to a handle
MATLAB won't let you add numeric values to an array of graphics objects. A similar problem occurs if you try to use an object handle in a function where MATLAB expects a numeric value. A simple example of this happens with the sprintf function.
a = sprintf('You clicked on figure %d\n', gcf);
The %d specification in the sprintf format string expects an integer value. However, since gcf is a figure object, you get the following error.
Error using sprintf
Function is not defined for 'matlab.ui.Figure' inputs.
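One workaround, if the figure number is genuinely needed, is an explicit numeric conversion:

```matlab
% double(gcf) returns the figure's number in R2014b
a = sprintf('You clicked on figure %d\n', double(gcf))
```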
Here is one final example. Because graphics handles used to be numbers, you could use them in logical expressions.
if (get(0, 'CurrentFigure'))
    disp(['Figure ' get(gcf, 'Name')])   % display the figure name for gcf
else
    disp('No open figures')              % there is no open figure
end
This worked in earlier versions of MATLAB because get(0,'CurrentFigure') would return either an empty array or a numeric figure handle. Both of these values are valid in the logical test of the if statement above. In R2014b, this will cause an error.
Conversion to logical from matlab.ui.Figure is not possible.
We have tried to maintain compatibility with previous releases in some cases. For example, you can still use 0 to refer to the graphics root in functions like get and set. As a best practice, however, we now recommend using the groot function to get the graphics root. Similarly, we still support the use of literal integer values to refer to figures in functions like set, get, and figure. Again, the best practice is to use a variable which contains the object when using these functions.
If you find yourself really stuck, it is possible to cast object handles to numeric handles using the double function. You can then cast the number back to an object handle using the handle function. We don't recommend this as a long term solution. Be aware that we may choose to remove this feature in a future version of MATLAB. If we do, we'll let you know in advance.
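Here is a short sketch of that round trip (again, not recommended as a long-term solution):

```matlab
f = figure;      % f is a matlab.ui.Figure object
n = double(f);   % cast to a numeric handle
g = handle(n);   % cast back to an object handle
isequal(f,g)     % both handles refer to the same figure
```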
In earlier versions of MATLAB, the legend and colorbar functions created objects whose type was 'axes'. Technically, legends and colorbars were subclasses of axes. In R2014b, the legend function creates a Legend object and the colorbar function creates a ColorBar object.
So what does that mean? First, the results of a findobj call in R2014b may be different than in previous versions. In R2014b, the command
findobj('Type', 'Axes')
will not return any legend or colorbar objects. To get those objects you will need to do the following:
findobj('Type', 'Legend')
findobj('Type', 'ColorBar')
Second, legends and colorbars cannot become the current axes. Prior to R2014b, you could write code that looks like this:
plot(1:10)
cb = colorbar;
axes(cb)               % Make the colorbar the current axes
title('My Colorbar')   % set the title of the colorbar
If you try to run this code in R2014b you will see an error.
Error using axes
Handles of type ColorBar cannot be made the current axes
In R2014b, you can still use the title function. Just give it the colorbar handle as the first argument.
title(cb, 'My Colorbar')
Alternatively you can use the colorbar's Label property to get the text object for the label and set its string property.
cb.Label.String = 'My Colorbar'
Charting functions like bar, contour, stem, and others return one or more graphics objects. In previous versions of MATLAB, those objects had names like barseries, contourgroup, and stemseries. These objects were each a special type of object called an hggroup. Each of these hggroup objects had a set of children. The children were low-level objects like lines or patches. Here is an example from R2014a.
[~,h] = contour(peaks, 'LevelList', -6:1:8);
get(h, 'Type')   % an hggroup object is returned by the contour function
ans =
hggroup
ch = get(h, 'Children');
get(ch(1), 'Type')   % the children of the hggroup are patch objects
ans =
patch
Prior to R2014b, the contour command returned an hggroup object whose children were patch objects (one for each contour line). Using the patch objects, it was then possible to do interesting things with the contour lines. You could, for example, make the even-numbered contour lines solid and the odd-numbered contour lines dashed as shown below.
As you've probably guessed, things are different in R2014b. First, the types of the objects returned by these functions are different. For example, the bar function returns a Bar object and the contour function returns a Contour object. Second, the objects returned by these functions no longer have any children. That means you can't get to the low-level objects.
So what do you do? You have to take a different approach. In the code below, I've created two contours: one with solid lines and one with dotted lines. This code will create a plot like the one shown above. It works in R2014b and in previous versions of MATLAB.
major = -6:2:8;
minor = -5:2:7;
[~,hmajor] = contour(peaks,'LevelList',major);   % contour with even-numbered levels
hold on
[~,hminor] = contour(peaks,'LevelList',minor);   % contour with odd-numbered levels
set(hminor,'LineStyle',':')                      % make the odd-numbered levels dotted
hold off
There is one last change that I want to talk about. It can affect MATLAB GUIs created before R2014b. Suppose I have a simple GUI with a panel and a button. I might have code that looks something like this:
figure('NumberTitle', 'off', 'Name', version('release'), ...
    'Position', [100 100 350 260], 'MenuBar', 'none', 'Toolbar', 'none');
button = uicontrol('Style', 'pushbutton', 'String', 'My Button', ...
    'Units', 'normalized', 'Position', [0.4 0.5 0.25 0.15]);
panel = uipanel('Position', [0.10 0.10 0.8 0.8], 'Title', 'My UIPanel');
Seems OK right? Here's what it looks like in R2014a and R2014b.
So what happened here? I don't see my button in R2014b. Where did it go? It turns out that the button is still there but in R2014b it is drawn under the panel.
In previous versions of MATLAB, uicontrols were always drawn on top of uipanels regardless of the order in which they were created. In R2014b, the two components are drawn in creation order. Since the panel was created after the button, it is drawn on top. What I really want is to have the button be a child of the panel. In order to do that, I can change my code to look like this:
figure('NumberTitle', 'off', 'Name', version('release'), ...
    'Position', [100 100 350 260], 'MenuBar', 'none', 'Toolbar', 'none');
panel = uipanel('Position', [0.10 0.10 0.8 0.8], 'Title', 'My UIPanel');
button = uicontrol('Parent', panel, 'Style', 'pushbutton', 'String', 'My Button', ...
    'Units', 'normalized', 'Position', [0.4 0.5 0.25 0.15]);
You can also fix this problem in GUIDE by moving the button out of the panel and then moving it back in. This will automatically make the button a child of the panel.
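For a GUI built programmatically, reparenting after the fact has the same effect. This one-liner assumes the button and panel variables from the example above:

```matlab
set(button, 'Parent', panel)   % the button is now a child of the panel
```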
You can check your existing GUIs by downloading a tool from Tools for transitioning to R2014b Graphics. For more information, see Why Are Some Components Missing or Partially Obscured?
Read the materials I've listed above to learn more about the changes to graphics in R2014b and how they might affect your MATLAB code. Download the tools and run them on your code and your GUIs. If you aren't able to find the information you need to work around one of the changes in R2014b, contact our Technical Support Team. They are very familiar with the changes in R2014b and can help you find a solution.
Have you encountered any of these changes in R2014b? Have you been able to find the information you need? Have you been able to use this information to make necessary changes to your code? Have you tried the transition tools? We'd love to hear your thoughts here.
I am hopeful that this series of posts has given you a good introduction to the R2014b graphics system. There's a lot of new stuff to digest in this release. Try these things out and let me know what you think.
Thanks to Loren for letting me take over her space for the last few weeks.
The Partial Differential Equation Toolbox contains tools for the analysis of PDEs in two space dimensions and time. It is perhaps not surprising that one of the primary examples involves the L-shaped membrane.
If you have the PDE toolbox installed, bring up pdetool. Click on the first triangle in the toolbar. This initializes a tool for creating a planar domain with an unstructured mesh. The default domain is our L. The default mesh has equally spaced points on the boundary connected by an irregular grid in the interior with 150 nodes and 258 triangles.
The PDE tab allows the specification of a general elliptic operator with variable coefficients, but if we accept the defaults we have our desired Laplacian. Here is the first eigenfunction, obtained with the coarse grid, plotted with the default cool colormap. The first eigenvalue, reported in the title, is 9.9707.
Refine the grid three times to one with 8417 nodes and 16512 triangles, change to our new parula colormap, and add contour lines. The reported eigenvalue is 9.6525.
The pdetool has an option to plot the absolute value of the gradient. We see that the singularity at the origin acts like a black hole, sucking in all the color.
The mesh refinement is accomplished by adding grid points at the midpoints of the triangles, thereby quadrupling the number of triangles. It is possible to do this five times, producing a fine mesh with 258 * 4^5 = 264192 triangles. (Actually the mesh can be refined six times to one with over a million triangles, but the subsequent eigenvalue calculation runs out of memory.)
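The quadrupling is easy to tabulate:

```matlab
% Triangle counts after 0 through 5 refinements of the initial mesh
258 * 4.^(0:5)   % 258, 1032, 4128, 16512, 66048, 264192
```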
Here are the results for the first eigenvalue of the L-shaped membrane obtained with successive bisections of the unstructured triangular mesh.
format long
load pdetool_results
L
L =
   9.970747465677787
   9.745769128365856
   9.675647712787020
   9.652453140975609
   9.644395207950412
   9.641482807142362
These values are candidates for acceleration by Aitken's delta-squared process.
$$ x_n - \frac{(\Delta x_n)^2}{\Delta^2 x_n} $$
d1 = diff(L,1);
L2 = L(3:end) - d1(2:end).^2./diff(L,2)
L2 =
   9.643895738979518
   9.640988739828559
   9.640105597452624
   9.639834371546435
We can even use delta-squared a second time, although this is not often justified.
t1 = diff(L2,1);
L2(3:end) - t1(2:end).^2./diff(L2,2)
ans =
   9.639720224107141
   9.639714153353552
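The two applications above follow the same pattern, which could be captured in a small helper function (my own packaging of the code shown, saved as aitken.m):

```matlab
function y = aitken(x)
% One pass of Aitken's delta-squared acceleration on the sequence x.
d1 = diff(x,1);
y = x(3:end) - d1(2:end).^2 ./ diff(x,2);
end
```

With this, aitken(L) and aitken(aitken(L)) reproduce the accelerated values shown above.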
Compare this with the value I reported in my previous post, obtained by extrapolating from a regular square grid.
lambda1 = 9.639723753239963
We will see in my next post a much better way to compute the eigenvalue that will provide a result to nearly full double precision accuracy.
by Wendy Fullam
Have you ever found yourself thinking, “I wish there was a place where I could go to see a bunch of MATLAB examples…”?
MathWorks has just what you’re looking for. We just launched a big new feature on the Support page: MATLAB Examples. On it, you can discover thousands of code examples for MATLAB and Simulink: data manipulation, signal processing, machine learning, statistics, computational biology, finance, you name it. Maybe you’ll even discover a few things that you didn’t know were possible.
MATLAB Examples is a single destination to find high-quality code examples, including those authored by both MathWorks staff and contributors to the File Exchange. Here’s what the main page looks like.
What can you do here?
You can browse by topic area:
You can search by keyword, and filter the results:
After finding an example that you want to use, you can quickly see what product contains that example:
or access the code:
You can also find related examples:
The Help Team at MathWorks hopes you enjoy the new interface and welcomes your thoughts and reactions. Check out the new area and then let us know what you think in the comments below!
If you're going to be there, come see my MATLAB Techniques for Image Processing talk on Wednesday, 12:45–13:45. Or stop by the MathWorks booth.
Now you'll have to excuse me while I finish preparing my talk.
The first one I saw was Cleve's follow-up post on the eigenvalues and eigenfunctions of the L-shaped membrane, the basis of the MathWorks logo.
The last image from Cleve's post is a lovely contour plot of one of the eigenfunctions of the shape, rendered using the new parula colormap I've been discussing this month.
The second was the hilariously over-the-top high-tech Friday afternoon cookie detector created by Tom Lowell and posted on Loren's blog. Tom uses an IP camera on the ceiling and Image Processing Toolbox functions and the new webread function in R2014b and some File Exchange contributions to solve a difficult problem faced by MathWorkers in Natick, Massachusetts at the end of every week.
Last week I showed you the new MATLAB colormap, parula. Parula has replaced jet as the default colormap in R2014b, which was released earlier this month.
This week I want to explain some of the motivations for replacing jet. Jet is an example of a rainbow colormap. A rainbow colormap is based on the order of colors in the spectrum of visible light. Some of you might know the mnemonic "Roy G. Biv" for remembering the order of colors: red, orange, yellow, green, blue, indigo, and violet.
Multiple methods exist for constructing rainbow colormaps, so they don't all look alike. Here is jet, the MATLAB version:
showColormap('jet','bar')
In my introductory post last week, I showed you a few visualization examples using jet, and I asked you some questions about them. Let's look at the questions again and figure out the answers.
Question 1: In the chart below, as you move from left to right along the line shown in yellow, how does the data change? Does it trend higher? Or lower? Or does it trend higher in some places and lower in others?
fluidJet
hold on
plot([100 160],[100 135],'y','LineWidth',3)
hold off
colormap(gca,'jet')
By my count there are six very distinct color changes as you move along the line from left to right. So what does that mean about the data?
I posed this question to someone who is very familiar with the jet colormap, and I was surprised by the response: "The data starts low, goes high, and then goes back low again."
"But what about all those other color stripes," I asked? "What do they mean?"
"That's just what jet does. I've learned to ignore all those stripes."
Oh! Well, it turns out that some of the color stripes are indeed mostly meaningless. But not all of them!
Here's what the data actually does along that path.
improfile(s.X,[100 160],[100 135])
Yes, the data starts low, goes high, and then goes back low. But if you ignore all the color stripes in between the blue and the red, then you miss the presence of the smaller-magnitude peak on the left. Alternatively, if you believe that all the color stripes are meaningful, then the yellow stripes seem to falsely suggest distinct data regions to the left and the right of the main peak.
Now let's compare with parula.
fluidJet
colormap(gca,parula)
hold on
plot([100 160],[100 135],'y','LineWidth',3)
hold off
You can see that now there are only four distinct color changes as you follow the yellow line. These changes correspond directly to real features in the data. There are no extraneous color stripes that do not correspond to real data features.
Question 2: In the filled contour plot below, which regions are high and which regions are low?
filledContour15
colormap(gca,rgb2gray(jet))
In the plot above, I have done a crude simulation of what a jet-based visualization looks like if printed on a grayscale printer. Here's the full-color version:
colormap(gca,jet)
If you are familiar with jet and know therefore that blue is low and red is high, then you can probably look at the full-color version and give a reasonable interpretation of it. When rendered as grayscale, however, all hope is lost. The red and blue colors of jet are just about equally dark.
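You can quantify this "equally dark" problem by computing the perceived lightness at the two ends of each colormap. The sketch below uses the standard Rec. 601 luma weights (the same weighting rgb2gray applies) so it does not require any toolbox.

```matlab
% Compare lightness at the low and high ends of jet and parula.
j = jet(256);
p = parula(256);
% Rec. 601 luma: weighted sum of the R, G, B channels.
lumJ = 0.2989*j(:,1) + 0.5870*j(:,2) + 0.1140*j(:,3);
lumP = 0.2989*p(:,1) + 0.5870*p(:,2) + 0.1140*p(:,3);
endsJ = [lumJ(1) lumJ(end)]   % jet: both ends are similarly dark
endsP = [lumP(1) lumP(end)]   % parula: the high end is much brighter
```

In grayscale, jet's two ends are nearly indistinguishable, while parula's low end stays dark and its high end stays bright.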
Here are the full-color and grayscale results with parula.
filledContour15
colormap(gca,parula)
filledContour15
colormap(gca,rgb2gray(parula))
With parula, dark consistently means low, and bright consistently means high.
The last questions relate to the three plots below (A, B, and C) showing different horizontal oscillations.
Question 3: Which horizontal oscillation (A, B, or C) has the highest amplitude?
Question 4: Which horizontal oscillation (A, B, or C) is closest to a pure sinusoid?
Question 5: In comparing plots A and C, which one starts high and goes low, and which one starts low and goes high?
subplot(2,2,1)
horizontalOscillation1
title('A')
subplot(2,2,2)
horizontalOscillation2
title('B')
subplot(2,2,3)
horizontalOscillation3
title('C')
colormap(jet)
The answers are subjective and might depend on your monitor and ambient lighting. But here's what I see.
The data pattern in image A has significantly higher visual contrast than either image B or image C. That suggests that the data oscillation in A has the highest amplitude.
The data patterns in images B and C both exhibit vertical stripes within each period of the oscillation. That strongly suggests that the horizontal oscillations shown in B and C have some constant or almost-constant regions and that they go up and down in something like a stair-step pattern. To my eye, image A looks the most like a pure sinusoid.
And finally, the pattern in image A starts with a bright stripe on the left, whereas the image C pattern starts with a dark stripe. That suggests that oscillation A starts high, while oscillation C starts low (or maybe vice versa).
You probably won't be surprised to find out that these are all trick questions. Actually, oscillations A, B, and C are all sinusoids with exactly the same amplitude and shape. They differ only in their constant offsets, which place them in different regions of the jet colormap. Note especially that oscillations A and C are actually in phase with each other; they both start high and go low. The differences we see are all visual artifacts caused by the jet colormap.
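A sketch of how such trick images could be constructed: three identical sinusoids that differ only in a constant offset, so each lands in a different band of the colormap. The amplitudes and offsets below are illustrative values I chose, not the figures' actual parameters.

```matlab
% Three identical horizontal oscillations, shifted into different
% regions of the colormap by a constant offset.
x = linspace(0, 4*pi, 400);
osc = 0.05 * sin(x);              % same amplitude and phase for all three
offsets = [0.8 0.5 0.2];          % A, B, C sit in different color bands
titles = {'A', 'B', 'C'};
for k = 1:3
    subplot(2,2,k)
    imagesc(repmat(osc + offsets(k), 50, 1))  % render as a horizontal band
    caxis([0 1])                              % shared color scale
    title(titles{k})
end
colormap(jet)                     % jet makes the identical data look different
```

Swapping the last line for colormap(parula) makes the three bands look as similar as the underlying data actually is.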
Compare the result using parula:
colormap(parula)
With these examples, I wanted to show several ways in which the jet colormap, and rainbow colormaps in general, can mislead the viewer about properties of the data.
It turns out that some people have been writing about these problems with rainbow colormaps for years. Here's a summary of the main criticisms:
For a much more detailed summary of the literature and online sources regarding rainbow colormaps, see the paper "Rainbow Color Map Critiques: An Overview and Annotated Bibliography" on mathworks.com.
Next time I'll start getting into colormap construction principles.
Get the MATLAB code
Published with MATLAB® R2014b
by Toshi Takeuchi
One of the new R2014b features that deserves your attention is Git integration. Git is a source control system (also known as version control or source code management system) that enables collaborative software development. Why does that matter to you? Programming is an essential skill in many technical fields even outside computer science, and some universities now offer software carpentry workshops to enhance coding skills for researchers. Source control is one of those essential skills in software carpentry.
Until now, you may have tinkered alone with code you needed for your project. However, there are other people who may be working on similar problems and they may be writing similar programs. Source control enables you to work with other people so that you don’t have to do it all alone. Collaboration lets you be more productive in other aspects of your project.
Even if you don’t care about such collaboration, wouldn’t it be cool to share your personal project and see other people using it? They may even fix bugs and improve your code for you!
GitHub is one of the most popular websites that host Git repositories. The best place to share your MATLAB projects is File Exchange because of its popularity with the MATLAB user community. And guess what – File Exchange is integrated with GitHub! Now you see the connection?
What is a Git repository? A repo (repository) is a directory that holds your source code and any associated files. Local repos are on your local drive, and the remote repos are on GitHub or other hosts, and you sync the local repos to remote repos as you write your code. You can start with either the local or remote repos, but in this example I am going to start with a remote repo.
The process looks like this for a single developer:
You cannot talk about Git without mentioning “forking”. In the simplest terms, forking means copying someone else’s public repo on the remote server, rather than starting a repo from scratch. In practice, forking is used as a way to contribute to existing projects or to start a new project using an existing project as a starting point. Once you make changes to your forked project, you can send a pull request to the original developer, and your changes may be accepted and incorporated into the main project.
Forking enables a flexible, distributed style of collaboration, and the number of forks your project has acts as a measure of popularity – similar to the count of likes or followers on Facebook or Twitter. The social aspect of forking is an interesting topic on its own, but we need to skip it for this post.
Signing up with GitHub is easy – just click the sign-up button on the homepage and follow the instructions. A free account will do for now. You also need to download and install Git. Even though GitHub has GUI apps for Windows and Mac, you need to set up the command-line tool for use with MATLAB. You also want to follow these instructions for registering binary files with Git.
Creating a repo on GitHub is very easy – just follow these instructions. From this point on I will assume you named your repo HelloWorld and initialized it with a README file. Please note that you can only create a public repo with a free account.
Until recently, you needed to use the command line tool for this step, but starting with R2014b we can just use MATLAB’s Current Folder window. No more Git commands like git init, git status, git add, or git commit!
Open your copy of MATLAB and create an empty folder. Right-click the empty space in the Current Folder window to bring up a contextual menu, and select Source Control > Manage Files.
This will open a new dialog box: Manage Files Using Source Control.
You now see a new dialog box: Select a Repository. Copy and paste the URL of the remote repo you just created. You can find the URL in the right sidebar of your new repo on GitHub.
You choose either SSH or HTTPS depending on how you set up your authentication on GitHub.
Click Validate. You may be asked for your login password for authentication. You can close the dialog box when your path is validated.
Back in the Manage Files dialog box, the sandbox should already be set to your current folder. All you need to do now is hit Retrieve.
You have now successfully cloned the remote repo to your local drive. Check your Current Folder window. You should see just one file – README.md – with a green circle next to it. This is just a text file, but you can apply a wiki-like syntax called Markdown to make it appear like a regular web page when viewed on GitHub. README serves as the front page of your repo on GitHub.
Let’s add a new MATLAB script file helloworld.m. It will appear with a blank circle – this means it has not been added to Git source control yet. To add it to Git, right-click on the file and select Add to Git. The empty circle changes to a “+” symbol. When you modify a file already under source control, the symbol becomes a blue square.
You can continue editing files as you like, but at some point you will want to take a snapshot of the edits you made. That’s when you do a commit. Right-click any empty space in the Current Folder window to bring up the contextual menu and select Commit to Git Repository. This will bring up a dialog box where you can add a comment about the changes you made since the last commit. Comments help you keep track of your changes and revert to earlier commits if necessary.
When you commit, the snapshot is saved in the local repo, but it is also a good idea to mirror the changes to the remote repo as well. To do so, bring up the contextual menu by rightclicking an empty space in the Current Folder window and select Push. That will push your changes to the remote repo. You may need to enter your password.
The real power of source control comes from the ability to create multiple branches in your project. By default, you have a branch called “master” in your repo. You can create a new branch from the master branch, make changes, and then merge those changes back to the master. This mechanism is used for working on new experimental features without affecting the working code on the master. You can branch and merge in MATLAB, but the details are beyond the scope of this post.
If you have been curious about Git but put off by its complicated command-line interface, Git integration in R2014b makes Git much more pleasant and approachable. I hope this quick introduction motivates you to take advantage of this new feature. When you do, please don’t forget to post your project to the File Exchange. To learn more about Git, it helps to start by reviewing the underlying concepts of how Git works.
We are very excited to announce that MATLAB R2014b introduces a new graphics system. Graphics developer Mike Garrity is launching a graphics-specific blog. Here is the landing page for this release of MATLAB and for the new graphics system in general.
m = mobiledev;
m.AccelerationSensorEnabled = 1;

Step 2: Send Data to MATLAB
m.Logging = 1;

You can verify this in MATLAB; note the Current Sensor Values in the result:
m
m =

  mobiledev with properties:

                      Connected: 1
                        Logging: 1
               InitialTimestamp: '02-Oct-2014 21:53:26.707'
      AccelerationSensorEnabled: 1  (20 Logged values)
   AngularVelocitySensorEnabled: 0
          MagneticSensorEnabled: 0
       OrientationSensorEnabled: 0
          PositionSensorEnabled: 0

  Current Sensor Values:
                   Acceleration: [0.2631 5.9226 8.1850] (m/s^2)

Step 3: Stop Acquiring Data and Retrieve Logs
m.Logging = 0;

To retrieve the data, use the accellog function:
[a, t] = accellog(m);

Step 4: Plot Raw Sensor Data
plot(t, a);
legend('X', 'Y', 'Z');
xlabel('Relative time (s)');
ylabel('Acceleration (m/s^2)');
x = a(:,1);
y = a(:,2);
z = a(:,3);

% Calculate and plot magnitude.
mag = sqrt(sum(x.^2 + y.^2 + z.^2, 2));
plot(t, mag);
xlabel('Time (s)');
ylabel('Acceleration (m/s^2)');
% Account for gravity.
magNoG = mag - mean(mag);

% Plot magnitude.
plot(t, magNoG);
xlabel('Time (s)');
ylabel('Acceleration (m/s^2)');
% Use FINDPEAKS to determine the local maxima.
minPeakHeight = std(magNoG);
[pks, locs] = findpeaks(magNoG, 'MINPEAKHEIGHT', minPeakHeight);

The number of steps taken is the number of peaks:
numSteps = numel(pks)
numSteps =

    15

Finally, you can also identify these locations on your plot of acceleration magnitude data:
hold on;

% Place a red marker on the locations that correspond to peaks.
plot(t(locs), pks, 'r', 'Marker', 'v', 'LineStyle', 'none');
title('Counting Steps');
xlabel('Time (s)');
ylabel('Acceleration Magnitude, Gravity Removed (m/s^2)');
hold off;
m.AccelerationSensorEnabled = 0;
clear m;
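If you want to experiment with the peak-counting step without a device, here is a self-contained sketch that runs the same pipeline on simulated accelerometer data. The sample rate, step rate, and noise level are made-up illustrative values, and I add a MINPEAKDISTANCE option (not in the walkthrough above) to suppress noise-induced double peaks.

```matlab
% Simulate 10 seconds of walking: gravity offset + 1.5 Hz bounce + noise.
rng(0)                               % make the noise reproducible
fs = 50;                             % assumed sample rate, Hz
t = (0:1/fs:10)';                    % 10 seconds of samples
mag = 9.8 + 2*sin(2*pi*1.5*t) + 0.1*randn(size(t));

magNoG = mag - mean(mag);            % remove the gravity offset
minPeakHeight = std(magNoG);
% MINPEAKDISTANCE keeps one peak per step despite sensor noise.
[pks, locs] = findpeaks(magNoG, 'MINPEAKHEIGHT', minPeakHeight, ...
                        'MINPEAKDISTANCE', round(0.3*fs));
numSteps = numel(pks)                % close to the 15 simulated steps
```

A 1.5 Hz oscillation over 10 seconds corresponds to 15 "steps", so the count should land right around 15.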
Mike is a computer graphics guru, well informed on the state of the art and knee-deep in the operational knowledge of How It All Works. He helped develop many of the features he’ll be describing, and he is a thoroughly entertaining writer. Beyond all this goodness, MATLAB graphics have taken a tremendous leap forward with the latest release, and Mike’s blog will give you an ideal perch to learn all about it. I recommend heading over there and seeing what he has to say.
Marta is a senior application engineer at MathWorks UK, helping MATLAB users to design, develop, and deploy production-quality code. Toshi is a senior marketing manager at MathWorks.
by Marta Wilczkowiak and Toshi Takeuchi
In the previous post, you learned how we built an RFID-based people-tracking system from scratch in three months, a “technology stack” that spanned multiple hardware and software disciplines. We ended the post with how we managed to survive the moment of truth: the first integration test, live at a conference with hundreds of delegates.
Now we had raw data to analyze to improve our system.
The Twist – Making Sense of RFID Data
Once we examined the real-world data collected during the event more closely, we faced a new dilemma – we had originally assumed we could use signal strength to estimate distances. In reality, however, there was so much fluctuation in signal strength that you could not make a confident estimate this way. This, incidentally, is why RFID-based localization is a hot research topic today.
We were determined to work on an engineering solution, but it would take more time and effort. In the meantime, was there a way to get meaningful insights from the messy data at hand?
Web Analytics Meets Engineering
As we discussed the conference data internally at MathWorks, this issue came to the attention of Toshi Takeuchi, a US-based analyst. He immediately saw a similarity with the familiar problems he faced in analyzing massive volumes of page views and clicks – always messy. What would happen if you applied web analytics techniques to this engineering problem? We embraced this suggestion and immediately shared our code and data with Toshi. Our collaboration was possible because we all spoke the same language – MATLAB.
Use of Proxy Metric
Non-physical measurements are often used in web analytics. For example, the recommendations you see on online shopping or movie streaming websites are computed using distance measured in terms of similarity or affinity. Was there anything in the dataset, besides the signal strength, that we could turn into a proxy metric? The new approach Toshi came up with was to recast connection frequency as this virtual distance. If two tracking devices are connecting to one another more frequently than others in a given time interval, then we can think of them as being “close” in the virtual space defined by connection frequency. This metric seemed far more reliable than signal strength. Our original goal was to understand the social interactions of the delegates, and the physical distance metric was just a means to that end. If a virtual metric did the same job, then it seemed a good substitute.
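A minimal sketch of the proxy-metric idea: convert pairwise connection counts into a "virtual distance" in which frequently connecting pairs are close. The counts matrix below is made-up illustrative data, not values from the conference logs.

```matlab
% counts(i,j) = number of connections between devices i and j
counts = [ 0 12  3;
          12  0  1;
           3  1  0];
% Normalize and invert: frequent connections -> small virtual distance.
dist = 1 - counts/max(counts(:));
dist(1:size(dist,1)+1:end) = 0;   % a device is at distance 0 from itself
dist
```

With this made-up data, devices 1 and 2 (12 connections) end up much "closer" than devices 1 and 3 (3 connections), which is the behavior the analysis above relies on.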
The Result
With support from Toshi, we were able to reconstruct the estimated movements of the delegates using the connection frequencies with stationary anchors such as demo stations (orange circles) and catering (green circles). Note that no personally identifiable information was collected or used for this analysis.
In the plot above, you see such an estimated movement for Delegate #120. This person appeared to have approached a nearby catering station first, then spent quite some time around BLOODHOUND Super Sonic Car in the middle, but in the afternoon stayed “closer” to software related demos than hardwarebased demos as measured in this “virtual” space.
We can use this information to map the frequencies of connections between delegates and demo stations, and visualize it as a heat map. The vertical bands with lighter red stripes indicate which demos had more connections than others, and the horizontal stripes show which demo stations each delegate connected to frequently. You can see that people who connected with “RT” (a real-time testing exhibit) and “HW” (a hardware testing exhibit) had very little overlap with the people who connected with “Robo” (a robotics exhibit).
The Insight – the Three Factors of Delegate Behavior
Ultimately, the data is only as good as the insights you get from it. When we analyzed the estimated movements of the delegates with a technique called Principal Component Analysis, we could summarize the complex data with just three factors.
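A sketch of how such a summary could be computed: run pca (Statistics Toolbox) on a delegate-by-demo connection matrix and keep the leading components. Random counts stand in here for the real logs, and the matrix dimensions are illustrative.

```matlab
% Rows = delegates, columns = demo stations; values = connection counts.
rng(0)
X = randi([0 20], 100, 8);                 % 100 delegates, 8 demo stations
[coeff, score, ~, ~, explained] = pca(X);  % components of the centered data
score3 = score(:, 1:3);                    % keep the first three factors
explained(1:3)                             % percent variance each explains
```

Each row of score3 places one delegate in the three-factor space, which is what the scatter plots discussed below visualize.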
Furthermore, “Web” people and “Robo” people tend to share similar connection patterns as compared to “Conventional” people – what is common to both? They are both what we might call “convergence” applications where software and hardware are being integrated in an unconventional way.
While this might not be a novel insight, it was still very useful for us to know that the data backed up what we had been intuitively sensing all along. It was also interesting to see which delegates showed which particular inclinations on an individual basis (each dot in the plot identifies a specific delegate).
Back to Reality – the Next Step
It is tempting to speculate whether the virtual proximity on this plot correlates with actual shared interests among those delegates. If so, it is also interesting that two of the main factors turned out to be the web and robotics, which are examples of convergence-oriented applications that cross the traditional virtual/physical divide.
If there were an exit survey that asked the participants which demos they found interesting, we could use predictive analytics techniques to validate how well this virtual metric predicts actual interest. Alternatively, we could use additional stationary anchors to determine which sessions delegates attended. If we placed more anchors on the floors in a more evenly spaced grid, including the areas populated by third-party exhibitors, it would give more data points and more accurate measurements.
If we validated this correlation, it could open up opportunities for new ways for our delegates to find likeminded people.
The Journey Continues
Our journey is not over yet – we still have challenges with the system. But our project proved that such an interdisciplinary system could be built in three months using MATLAB, Simulink, and lots of teamwork.