
The BioMassters Challenge Starter Code » Student Lounge


Joining us today is Grace Woolson, who joined the student programs team in June 2022 to support data science challenges and help participants use MATLAB to win! Grace will discuss a new data science challenge that launches today, November 1, 2022, with our partner DrivenData. Grace, over to you..

Getting Started with MATLAB

Hello all, my name is Grace Woolson, and today we will be talking about a new Data Science Challenge for you to test your skills! We at MathWorks®, in collaboration with DrivenData, are excited to bring you this challenge. The objective is to estimate the annual aboveground biomass (AGBM) in a given patch of Finland when provided satellite imagery of that patch. In this blog we provide a basic starter example in MATLAB®. In this code, I create a basic image-to-image regression model and train it to predict peak AGBM for each pixel in the input data. Then I use this model on test data and save the results in the format required for the challenge.
This should serve as basic starting code to help you begin analyzing the data and work towards developing a more efficient, optimized, and accurate model using more of the available training data. To request your complimentary MATLAB license and other getting-started resources, visit the MathWorks BioMassters challenge homepage.

If you want to access and run this code, you can use the 'Run in your browser' and 'Download Live Script' buttons in the bottom right corner of this page.


The Data

Each chip_id represents one patch of land in a given year. For each chip, you are provided roughly 24 satellite images and 1 AGBM image.

The satellite imagery comes from two satellites called Sentinel-1 (S1) and Sentinel-2 (S2), covering nearly 13,000 patches of forest in Finland from 2017 to 2021. Each chip is 2,560 by 2,560 meters, and the images of these chips are 256 by 256 pixels, so each pixel represents a 10 by 10 meter area of land within the chip. You are provided a single image from each satellite for each calendar month. For S1, each image is generated by taking the mean across all images acquired by S1 for the chip during that time. For S2, you are provided the best image for each month.

The AGBM image serves as the label for each chip in a given year. Just like the satellite data, the AGBM data is provided in the form of images that cover 2,560 meter by 2,560 meter areas at 10 meter resolution, which means they are 256 by 256 pixels in size. Each pixel in the satellite imagery corresponds to a pixel in the AGBM image with the same chip ID.

For the competition, you will use this data to train a model that can predict the AGBM value when provided only the satellite imagery. To learn more about the images, features, labels, and submission metrics, head over to the challenge's Problem Description page!

Preview the Data

To understand the data we will be working with, let's look at a few example images for a specific chip_id. In the sections below, the images correspond to chip_id 0a8b6998.

First, define a variable that points to the S3 bucket so that we can access the data. You can find this path in the 'biomassters_download_instructions.txt' file provided on the data download page. Make sure this is the path for the entire bucket, not any specific folder; it should start with 's3://'. This will be used throughout the blog.

% Example path; you will need to replace this

s3Path = 's3://competition/file/path/';

Sentinel-1:

For each chip_id, we expect to see 12 images from Sentinel-1 with the naming convention {chip_id}_S1_{month}, where month is a value between 00 and 11. There are cases where data may be missing, which can result in one or more of these images being absent.
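As a quick sketch, here is how you could list the 12 filenames this convention implies for the example chip (the variable name expectedS1Names is just for illustration):

expectedS1Names = compose("0a8b6998_S1_%02d.tif", (0:11)')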

Each Sentinel-1 image has 4 bands, where each band is one 256x256 matrix that contains a specific measurement for the chip. Let's visualize each band of one of these S1 images:

exampleS1Path = fullfile(s3Path, 'train_features', '0a8b6998_S1_00.tif');

exampleS1 = imread(exampleS1Path);

% To visualize each layer, rescale the values of each pixel to be between 0 and 1

% Darker pixels indicate lower values; lighter pixels indicate higher values

montage(rescale(exampleS1));

Sentinel-2:

Much like Sentinel-1, for each chip_id we expect to see 12 images from Sentinel-2 with the naming convention {chip_id}_S2_{month}, where month is a value between 00 and 11. There are cases where data may be missing, which can result in one or more of these images being absent.

Each Sentinel-2 image has 11 bands, where each band is one 256x256 matrix that contains a specific measurement for the chip. Let's visualize each band of one of these S2 images:

exampleS2Path = fullfile(s3Path, 'train_features', '0a8b6998_S2_00.tif');

exampleS2 = imread(exampleS2Path);

% To visualize each layer, rescale the values of each pixel to be between 0 and 1

% Darker pixels indicate lower values; lighter pixels indicate higher values

montage(rescale(exampleS2));

AGBM:

For each chip_id, there will be one AGBM image, with the naming convention {chip_id}_agbm.tif. This image is a 256x256 matrix, where each element is a measurement of aboveground biomass in tonnes for that pixel. For 0a8b6998, it looks like this:

exampleAGBMPath = fullfile(s3Path, 'train_agbm', '0a8b6998_agbm.tif');

exampleAGBM = imread(exampleAGBMPath);

% Since we only need to visualize one layer, we can use imshow

imshow(rescale(exampleAGBM))

Import and Preprocess the Data

Before we can start building a model, we have to find a way to get the data into the MATLAB Workspace. The data for this competition is contained in a public Amazon S3™ bucket. The URL for this bucket will be provided once you have registered, so make sure you have signed up for the challenge so you can access the data. In total, all of the imagery provided takes up about 235GB of memory, which is too much to work with all at once. So that we can work with all of the data, I will be taking advantage of MATLAB's imageDatastore, which allows us to read in the data one chip_id at a time and will make it easy to train a neural network later on. If you want to learn more about datastores, you can refer to the following resources:
  1. Getting Started with Datastore
  2. Datastores for Deep Learning
  3. Datastore for Image Data

We use the s3Path variable we created earlier to create agbmFolder, which points specifically to the AGBM training data.

agbmFolder = fullfile(s3Path, 'train_agbm');

We can then use agbmFolder to create a datastore for our input (satellite imagery) and output (AGBM imagery) data, named imInput and imOutput respectively. When you use an imageDatastore, you can change the way images from the specified directory are read into the MATLAB Workspace using the 'ReadFcn' option. Since I want to read one AGBM image but 24 satellite images at a time, I define a helper function readTrainingSatelliteData that takes the filename of the AGBM file we are about to read, which contains the chip_id, and instead reads in and preprocesses all corresponding satellite images. Then I use the built-in splitEachLabel function to divide the dataset into training, validation, and testing data, so that we can evaluate performance during and after training. For this example, I chose to use 95% of the data for training, 2.5% for validation, and 2.5% for testing because I wanted to use most of the data for training, but you can play around with these numbers. The helper function, defined in the Helper Functions section at the end of this blog:
  1. Extracts the chip_id from the filename of the AGBM image that we are about to read
  2. Reads in and orders all satellite images that correspond to this chip_id
  3. Handles missing data. Since this is just our first model, I have decided to omit any images that contain missing data.
  4. With the remaining data, finds the average value of each pixel for each band.
  5. Rescales the values to be between 0 and 1. Each satellite has different units of measurement, which can make it difficult for some algorithms to learn from the data properly. Normalizing the data scale can help the neural network learn better.

This results in a single input image of size 256x256x15, where each 256x256 matrix represents the average values for one band from S1 or S2 over the course of the year. Since S1 has 4 bands and S2 has 11, this results in 15 matrices. This is a very simplified way to represent the data, as this will only be our starting model.
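As a sanity check, you can call this helper directly on the example AGBM path from the preview section and confirm the output size; this is just a sketch:

sampleInput = readTrainingSatelliteData(exampleAGBMPath, s3Path);

size(sampleInput) % expected: 256 256 15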

imInput = imageDatastore(agbmFolder, 'ReadFcn', @(filename)readTrainingSatelliteData(filename, s3Path), 'LabelSource', 'foldernames');

[inputTrain,inputVal,inputTest] = splitEachLabel(imInput,0.95,0.025);

For the output data, we will use the default read function, as we only need to read one image at a time and don't need to do any preprocessing. Since we are passing the same directory to each datastore, we know that they will read the images in the same chip_id order. Once again, split the data into training, validation, and testing sets.

imOutput = imageDatastore(agbmFolder, 'LabelSource', 'foldernames');

[outputTrain,outputVal,outputTest] = splitEachLabel(imOutput,0.95,0.025);
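Since both datastores list the same 'train_agbm' directory, a quick sanity check (a sketch, using the datastores' Files property) is to confirm their file lists match one-to-one:

isequal(imInput.Files, imOutput.Files) % expected: logical 1 (true)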

Once the data has been preprocessed, we combine the input and output sets so they can be used with our neural network later.

dsTrain = combine(inputTrain, outputTrain);

dsVal = combine(inputVal, outputVal);

dsTest = combine(inputTest, outputTest);

The preview function allows me to view the first item in the datastore, so that we can validate that the inputs (the first item) and outputs (the second item) are the sizes we expect:

sampleInputOutput = preview(dsTrain);

montage(rescale(sampleInputOutput{1})); % Input data

imshow(rescale(sampleInputOutput{2})) % Output data

Create the Model

Now that the data is imported and cleaned up, we can get started on actually developing a neural network! This challenge is interesting in that both the inputs and outputs are images. Typically, neural networks take an image as input and output a class (image classification) or a specific value (image-to-one regression), as shown below:

[Fig 2.1: visualization of an image classification convolutional neural network]

In this challenge, we are tasked with outputting a new image, so our network structure will need to look a little different:

[Fig 2.2: visualization of an image-to-image convolutional neural network]

Option 1: Create with the Deep Network Designer App

First, we have to choose a network architecture. For this blog, I have decided to create a starting network architecture using the 'unetLayers' function. This function provides a network for semantic segmentation (an image-to-image classification problem), so it can be easily adapted for image-to-image regression. If you want to learn more about other starting architectures, check out this documentation page on Example Deep Learning Networks Architectures.

Since the input images will be 256x256x15, this must also be the input size of the network. For the other options, I chose an arbitrary number of classes, since we will change the output layers anyway, and a starting encoder depth of 3.

lgraph = unetLayers([256 256 15], 2, 'EncoderDepth', 3);

From here, I can open the Deep Network Designer app and modify the model interactively. I like this option because it lets me visualize what the network looks like, and it's easier to verify that I've made the changes I want.

deepNetworkDesigner(lgraph)

When the app opens, it should look similar to the image below. If it's zoomed in on certain layers, you can zoom out to see the full network by pressing the space bar.

[Fig 3.1: Deep Network Designer]

From here, remove the last two layers, and change the 'Final-ConvolutionLayer' so that NumFilters is equal to 1. Some tips for using the app for this step:

  • To zoom in or out, hold CTRL and scroll up or down with the mouse.
  • To delete a layer, click on it and press the Backspace key on your keyboard.
  • To modify a property of a layer, click on the layer. This will open a menu on the right that you can interact with.

[Fig 3.2: Removing and Modifying layers in Deep Network Designer]

It's time to add in the regression layer:


[Fig 3.3: Adding a regression layer in Deep Network Designer]

Now the model is done! It's time to export it back into the MATLAB Workspace so it can be trained.


[Fig 3.4: Exporting a model from Deep Network Designer]

Note: it will automatically be exported as lgraph_1.

Option 2: Create Programmatically

First, we have to choose a network architecture. For this blog, I have decided to create a starting network architecture using the 'unetLayers' function. This function provides a network for semantic segmentation (an image-to-image classification problem), so it can be easily adapted for image-to-image regression. If you want to learn more about other starting architectures, check out this documentation page on Example Deep Learning Networks Architectures.

Since the input images will be 256x256x15, this must also be the input size of the network. For the other options, I chose an arbitrary number of classes, since we will change the output layers anyway, and a starting encoder depth of 3.

lgraph = unetLayers([256 256 15], 2, 'EncoderDepth', 3);

Now we have to change the final few layers so that the model performs regression instead of classification. I do this by removing the softmax and segmentation layers and replacing them with a new convolution layer and a regression layer. The new convolution layer has a single filter so that the final output image will be a single layer, and the regression layer tells MATLAB how to interpret the output and computes the model's half-mean-squared-error. To learn more about converting classification networks into regression networks, you can refer to this resource: Convert Classification Network into Regression Network.

lgraph = lgraph.removeLayers('Softmax-Layer');

lgraph = lgraph.removeLayers('Segmentation-Layer');

finalConvolutionLayer = convolution2dLayer([1, 1], 1, 'Name', 'Final-ConvolutionLayer-2D');

lgraph = lgraph.replaceLayer('Final-ConvolutionLayer', finalConvolutionLayer);

lgraph = lgraph.addLayers(regressionLayer('Name', 'regressionLayer'));

lgraph_1 = lgraph.connectLayers('Final-ConvolutionLayer-2D', 'regressionLayer');

Once the network is built, we can use the analyzeNetwork function to check for errors and visualize the network. This will open in a new window.

analyzeNetwork(lgraph_1);

[Fig 4: Analysis and visualization of lgraph_1]

Set Training Options

Once all of the layers are sorted out, it's time to set the training options. The trainingOptions function lets us specify which solver will train the model and how it will be trained, and it's important to play around with these options when training a model. There are endless combinations you can choose from, but these are the ones that have worked best for me so far:

options = trainingOptions('adam', ...
    'InitialLearnRate', 0.0001, ...
    'MiniBatchSize', 10, ...
    'ValidationData', dsVal, ...
    'OutputNetwork', 'best-validation-loss');

Note: if you want to see evaluation metrics and visualizations while the model is being trained, set 'Verbose' to true and set 'Plots' to 'training-progress'.
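For example, here is a sketch of the same options with progress reporting and plotting turned on:

optionsVerbose = trainingOptions('adam', ...
    'InitialLearnRate', 0.0001, ...
    'MiniBatchSize', 10, ...
    'ValidationData', dsVal, ...
    'OutputNetwork', 'best-validation-loss', ...
    'Verbose', true, ...
    'Plots', 'training-progress');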

Train the Model

This step can be done in just one line of code:

net = trainNetwork(dsTrain, lgraph_1, options)

While this is the shortest section of code, it can take several hours to train a deep learning model. If you have access to a supported GPU, I recommend using it; the trainNetwork function will automatically utilize a supported GPU if one is detected. The following resource contains more information on GPUs and deep learning: Run MATLAB Functions on a GPU
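Before starting a long training run, a small sketch like this can confirm whether MATLAB sees a usable GPU (canUseGPU returns true only when Parallel Computing Toolbox and a supported GPU are available):

if canUseGPU
    disp(gpuDevice); % report the detected GPU
else
    disp('No supported GPU detected; training will run on the CPU.');
end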

Evaluate the Model on New Data

Now we have a fully trained model that is ready to make predictions on the test data! Please note that the model I created was trained on only a subset of the training data, so the results you see in this section may look different from the results you get if you run the same code.

To get output images from the test set, use the predict function.

ypred = predict(net, dsTest);

The resulting ypred is a 4-D matrix. The first 3 dimensions represent each output image of size 256x256x1, and the last dimension represents how many of these images we have predicted. It's hard to tell how well our model performed just by looking at these numbers, so we can take a few extra steps to evaluate the network.
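A quick sketch to confirm that shape:

size(ypred) % expected: [256 256 1 N] for N test images

numPredictions = size(ypred, 4);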

Visualize

To access the first pair of satellite and AGBM images from the test set, use the preview function.

testBatch = preview(dsTest);

This allows us to visualize a sample input image, the actual AGBM, and the associated predicted AGBM from the network, to get a sense of how well the network is performing.

idx = 1; ref = testBatch{idx, 2};   % actual AGBM image for this chip
predicted = ypred(:,:,:,idx);       % corresponding predicted AGBM image
montage({ref, predicted});
title('Expected vs Actual');

While the images aren't identical, we can definitely see similar shapes and shading! Since the output data is a measure of AGBM and not a representation of color, however, the values for each pixel aren't between 0 and 1, so anything above 1 is displayed as a white pixel. Let's use the rescale function as we did earlier to get a better representation of the images, so we can see more detail and ensure that these higher values are still accurate.

rescaledPred = rescale(predicted);

rescaledRef = rescale(ref);

montage({rescaledRef, rescaledPred})

title('Expected vs Actual');

Now that we can see much more detail, we can confirm that the network does a good job of matching the general shapes and contours of the expected output. We can also see, however, that the image produced by the network is generally brighter than the expected output, indicating that many of the values are higher than they should be.
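To quantify that brightness bias rather than eyeballing it, here is a small sketch comparing the pixel-value distributions of the two images:

figure
histogram(ref(:)); hold on
histogram(predicted(:)); hold off
legend('Actual AGBM', 'Predicted AGBM');
title('Distribution of Pixel Values');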

Calculate Performance Metrics

For the competition, your final score will be the average root-mean-square error (RMSE) across all submitted images. RMSE can be represented by the following formula:

$E = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left|A_i - F_i\right|^2}$

for a forecast array F and actual array A made up of n scalar observations.

Given this formula, we can calculate the RMSE for a given prediction with the following line of code:

rmse = sqrt(mean((ref(:) - predicted(:)).^2))

The lower the RMSE, the better the model. As you can see, there is still plenty of room for improvement in this model.
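Since the competition score averages the RMSE over every submitted image, here is a sketch of the same calculation over the whole test split; it assumes the test labels fit in memory via readall:

refImages = readall(outputTest); % cell array of 256x256 AGBM labels
rmsePerImage = zeros(numel(refImages), 1);
for k = 1:numel(refImages)
    actual = double(refImages{k});
    predictedIm = double(ypred(:,:,:,k));
    rmsePerImage(k) = sqrt(mean((actual(:) - predictedIm(:)).^2));
end
meanRMSE = mean(rmsePerImage)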

Possible Next Steps for Improvement

Keep in mind that this network may not be the best, as my main goal with this blog was to show how to use the imageDatastore and how to set up a network. But I do have a network that genuinely tries, and there are plenty of ways to keep experimenting and improving it:

  • Create a model that accepts more information! Right now we lose a lot of information from the raw training data, so finding a way to use more of it could result in a more informed model.
  • Instead of ignoring missing data, find ways to fill it in. Do you make a copy of a previous satellite image when one is missing? Fill it in with an average? There are plenty of ways to approach this; one possibility is sketched after this list.
  • Incorporate the cloud cover layer.
  • Try out different model structures! One alternative example structure can be found here.
  • Experiment with training options.
  • Try different distributions of training, testing, and validation data.
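As one example of the fill-in idea above, here is a minimal sketch that replaces any missing month with the per-band mean of the months that are present; it assumes a 1x12 cell array s1Data, as in the helper functions below:

present = ~cellfun('isempty', s1Data); % which months have images
bandMean = mean(cat(4, s1Data{present}), 4); % 256x256x4 per-band mean
s1Data(~present) = {bandMean}; % stand-in for each missing month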

Predict on Test Data & Export Results

Once you have a model you are happy with, you will need to use it to make predictions on the test data. To do this, we'll first import and preprocess the data as we did above, then use the predict function to make predictions. Since we don't have an 'agbm' folder to use as reference this time, the way we preprocess the data has to look a little different.

To start, we will use the 'features_metadata' file provided to get a list of all test chip_ids.

featuresMetadataLocation = fullfile(s3Path, 'features_metadata.csv');

featuresMetadata = readtable(featuresMetadataLocation, 'ReadVariableNames', true);

testFeatures = featuresMetadata(strcmp(featuresMetadata.split, 'test'), :);

testChips = testFeatures.chip_id;

[~, uniqueIdx, ~] = unique(testChips);

uniqueTestChips = testChips(uniqueIdx, :);

Then I make a new folder that will hold all of the predictions, and a variable that points to this folder:

if ~exist('test_agbm', 'dir')
    mkdir('test_agbm');
end

% Include the full path to this 'test_agbm' folder; this is a placeholder -
% it should NOT be on the S3 bucket
outputFolder = 'C:\DrivenData\...\test_agbm\';

Then, iterate through each chip_id, format the input data to match the expected input of our network (256x256x15), make predictions on the input data, and export each prediction as a TIFF file using the Tiff and write functions. For the competition, the expected name for each of these TIFF files is '{chip_id}_agbm.tif'.

for chipIDNum = 1:length(uniqueTestChips)
    chip_id = uniqueTestChips{chipIDNum};
    inputImage = readTestingSatelliteData(chip_id, s3Path);
    pred = predict(net, inputImage);

    % Set up TIFF file and export prediction
    filename = [outputFolder, chip_id, '_agbm.tif'];
    t = Tiff(filename, 'w');

    % Need to set tag info of the Tiff file
    tagstruct.ImageLength = 256;
    tagstruct.ImageWidth = 256;
    tagstruct.Photometric = Tiff.Photometric.MinIsBlack;
    tagstruct.BitsPerSample = 32;
    tagstruct.SamplesPerPixel = 1;
    tagstruct.SampleFormat = Tiff.SampleFormat.IEEEFP;
    tagstruct.PlanarConfiguration = Tiff.PlanarConfiguration.Chunky;
    tagstruct.Compression = Tiff.Compression.None;
    tagstruct.Software = 'MATLAB';
    setTag(t, tagstruct);

    % Write the 32-bit floating point prediction and close the file
    write(t, single(pred));
    close(t);
end

And just like that, you've exported your predictions! To create a TAR file of these predictions, we can simply use the built-in tar function.

tar('test_agbm.tar', 'test_agbm');

The resulting 'test_agbm.tar' is what you will submit for the challenge.

Thank you for following along with this starter code! We're excited to see how you will build upon it and create models that are uniquely yours. Feel free to reach out to us in the DrivenData forum or email us at studentcompetitions@mathworks.com if you have any further questions. Good luck!

More Resources

If you want to learn more about deep learning with MATLAB, check out these resources!

Helper Functions

function avgImS1S2 = readTrainingSatelliteData(outputFilename, s3Path)

% Extract the chip_id from the filename of the AGBM image
outputFilenameParts = split(outputFilename, ["_", "/", "\"]);
chip_id = outputFilenameParts{end-1};

inputDir = fullfile(s3Path, 'train_features');
correspondingFiles = dir(fullfile(inputDir, [chip_id, '*.tif']));

% The satellite images range from 00-11, so preallocate a cell array
% for each satellite
s1Data = cell(1, 12);
s2Data = cell(1, 12);

% Compile and order all data
for fileIdx = 1:length(correspondingFiles)
    filename = correspondingFiles(fileIdx).name;
    filenameParts = split(filename, ["_", "."]);
    satellite = filenameParts{end-2};
    fullfilename = fullfile(inputDir, filename);
    im = imread(fullfilename);

    % Plus one because MATLAB indexing starts at 1
    idx = str2double(filenameParts{end-1}) + 1;

    % Add all input images to the ordered cell arrays
    if satellite == "S1"
        s1Data{idx} = im;
    elseif satellite == "S2"
        s2Data{idx} = im;
    end
end

% Handle missing data: omit months that are absent, have the wrong
% number of bands, or contain missing (-9999) values
for imgNum = 1:12
    if size(s1Data{imgNum}, 3) ~= 4
        s1Data{imgNum} = [];
    elseif ismember(-9999, s1Data{imgNum})
        s1Data{imgNum} = [];
    end
    if size(s2Data{imgNum}, 3) ~= 11
        s2Data{imgNum} = [];
    elseif ismember(-9999, s2Data{imgNum})
        s2Data{imgNum} = [];
    end
end
s1Data = s1Data(~cellfun('isempty', s1Data));
s2Data = s2Data(~cellfun('isempty', s2Data));

% Calculate average S1 data
totalImS1 = zeros(256, 256, 4);
for imgNum1 = 1:length(s1Data)
    currIm = s1Data{imgNum1};
    totalImS1 = totalImS1 + double(currIm);
end
avgImS1 = totalImS1 ./ length(s1Data);

% Calculate average S2 data
totalImS2 = zeros(256, 256, 11);
for imgNum2 = 1:length(s2Data)
    currIm = s2Data{imgNum2};
    totalImS2 = totalImS2 + double(currIm);
end
avgImS2 = totalImS2 ./ length(s2Data);

% Combine all bands into one 15-band image
avgImS1S2 = cat(3, avgImS1, avgImS2);

% Rescale so the values are between 0 and 1
avgImS1S2 = rescale(avgImS1S2);

end

function avgImS1S2 = readTestingSatelliteData(chip_id, s3Path)

inputDir = fullfile(s3Path, 'test_features');
correspondingFiles = dir(fullfile(inputDir, [chip_id, '*.tif']));

% The satellite images range from 00-11, so preallocate a cell array
% for each satellite
s1Data = cell(1, 12);
s2Data = cell(1, 12);

% Compile and order all data
for fileIdx = 1:length(correspondingFiles)
    filename = correspondingFiles(fileIdx).name;
    filenameParts = split(filename, ["_", "."]);
    satellite = filenameParts{end-2};
    fullfilename = fullfile(inputDir, filename);
    im = imread(fullfilename);

    % Plus one because MATLAB indexing starts at 1
    idx = str2double(filenameParts{end-1}) + 1;

    % Add all input images to the ordered cell arrays
    if satellite == "S1"
        s1Data{idx} = im;
    elseif satellite == "S2"
        s2Data{idx} = im;
    end
end

% Handle missing data: omit months that are absent, have the wrong
% number of bands, or contain missing (-9999) values
for imgNum = 1:12
    if size(s1Data{imgNum}, 3) ~= 4
        s1Data{imgNum} = [];
    elseif ismember(-9999, s1Data{imgNum})
        s1Data{imgNum} = [];
    end
    if size(s2Data{imgNum}, 3) ~= 11
        s2Data{imgNum} = [];
    elseif ismember(-9999, s2Data{imgNum})
        s2Data{imgNum} = [];
    end
end
s1Data = s1Data(~cellfun('isempty', s1Data));
s2Data = s2Data(~cellfun('isempty', s2Data));

% Calculate average S1 data
totalImS1 = zeros(256, 256, 4);
for imgNum1 = 1:length(s1Data)
    currIm = s1Data{imgNum1};
    totalImS1 = totalImS1 + double(currIm);
end
avgImS1 = totalImS1 ./ length(s1Data);

% Calculate average S2 data
totalImS2 = zeros(256, 256, 11);
for imgNum2 = 1:length(s2Data)
    currIm = s2Data{imgNum2};
    totalImS2 = totalImS2 + double(currIm);
end
avgImS2 = totalImS2 ./ length(s2Data);

% Combine all bands into one 15-band image
avgImS1S2 = cat(3, avgImS1, avgImS2);

% Rescale so the values are between 0 and 1
avgImS1S2 = rescale(avgImS1S2);

end
