
This blog post was authored by Xander Moffatt, a Software Engineer on our Interoperability Team.

 

tl;dr
Safari blocks all third-party cookies by default, which breaks LTI tools that rely on setting cookies when launched in an iframe. Instead, Safari exposes a new API for getting user-permitted access to set cookies from an iframe, which works, though it requires jumping through some fun hoops.

Background

Safari has been working towards this step for a few years now, since the introduction of the Storage Access API and Intelligent Tracking Prevention. These features were introduced to limit cross-site tracking that users never agreed to, and to preserve their privacy. These limitations have also grown more strict over the years, with most third-party cookies already being blocked by the time of the newest release, 13.1.

 

Before this release, third-party cookies were allowed to be set once the domain had set a first-party cookie, which occurs when the tool is launched in the parent browser window instead of a child iframe. Canvas implemented a change along these lines: launch the tool in the parent window, let it set a cookie, and provide a redirect URL so that the tool is launched again in an iframe, now with the ability to set third-party cookies.

 

This release makes Safari the first mainstream browser to fully block third-party cookies by default, though Chrome aims to ship the same features by 2022. The Storage Access API should also be made standard, providing a known way for LTI tools to remain functional. Note that this behavior can be turned off by unchecking Preferences > Privacy > Prevent cross-site tracking.

 

When cross-site tracking prevention is enabled, an LTI tool launch in Canvas currently gets stuck in an infinite loop, since Storage Access hasn't been granted and so the cookie can never be set.

Storage Access API

There are only two methods in the Storage Access API, but they are more complex than they look. document.hasStorageAccess asynchronously resolves to a boolean indicating whether the iframe already has access to set its own cookies. In practice, this is almost never true until a call to the next method, document.requestStorageAccess. That method also asynchronously resolves to a boolean indicating whether the iframe now has access, and it is where the hoop-jumping comes in:

  • This method must be called from a user gesture (like a tap or click). This means the user must click a button, and the listener for that button must directly call requestStorageAccess; any calls that aren't inside such a listener will immediately return false.
  • This method won't return true for a domain in an iframe unless the user has interacted with that domain in a first-party context. This is to make sure that the user knows and trusts the domain. Interaction here means another user gesture like a tap or click.
  • This method will return true if the user has been granted storage access in the last 24 hours.
  • Once this method has been called from a user gesture, the user has interacted in a first-party context, and the request is sent again from a third-party context, Safari will prompt the user with a browser dialog box to allow storage access. Once the user clicks Allow, this method will return true and the tool can finally set the cookies it needs to authenticate its user.

Solution

There are a couple of ways to approach this situation from a Canvas-launching-LTI tools standpoint, and both of them are on the tool side, as opposed to the Canvas side. Canvas continues its behavior of providing a redirect url when the tool requests a full window launch, but the tool has some decisions to make.


If your LTI tool can handle being stateless and not setting cookies (i.e., it doesn't require logging in, or the login process is fast enough to be done on every launch), do it. Move any non-login cookies to window.ENV or something, let the user log in if needed, and just plan on that whole flow happening on every launch.


If your LTI tool requires storing state in cookies and keeping the user logged in, there is a slightly more complex process to work with an inline Canvas launch. Note that the Storage Access API happens in JavaScript, but most LTI tools want to set httpOnly cookies from the server for sensitive values like a login token, so once the tool has Storage Access, a final redirect back to the server to set cookies and render the UI will be needed.

  1. When the tool launches, use document.hasStorageAccess to check if the tool already has Storage Access. This will most likely never be true, but if it is, redirect to the tool server to set cookies and render the UI.
  2. Request Storage Access using a user button click that calls document.requestStorageAccess. If the user has granted Storage Access within the last 24 hours, this will be granted. If granted, redirect to the tool server to set cookies and render the UI.
  3. If the request fails, then it’s time to get user interaction in a first-party context. Send a postMessage to Canvas requesting a full window launch, providing the tool’s normal launch url.
  4. Once that custom postMessage has been sent, Canvas will launch the tool again, in a full window. Canvas will send a platform_redirect_url in the request parameters, which is how you can tell it’s a full window launch. Get user interaction by having them click a button, and on that click redirect to the url Canvas supplied.
  5. Canvas will redirect to that url, which means another tool launch in an iframe. The tool will go through steps 1 and 2 again, and this time Safari should prompt the user to grant access. Once that happens, the tool has Storage Access and should redirect to the tool server to set cookies and render the UI.

Efforts are being made to encapsulate this behavior in some sort of gem/module, but since it touches both server- and client-side code it might be hard.

 

Though this method requires anywhere from 1-3 user button clicks before the app loads, it does provide a non-hacky way of interacting with cookies in Safari.

Note that these are snippets that don't have all variables and dependencies added. They are just for reference!
  • checking for storage access
document.addEventListener("DOMContentLoaded", () => {
  if (document.hasStorageAccess) {
    // Safari: check whether the iframe already has access to its cookies
    document
      .hasStorageAccess()
      .then((hasStorageAccess) => {
        if (hasStorageAccess) {
          redirectToSetCookies();
        }
      })
      .catch((err) => console.error(err));
  } else {
    // Browsers without the Storage Access API can set cookies directly
    redirectToSetCookies();
  }

  // Render the prompt that asks the user to request storage access
  ReactDOM.render(
    <RequestStorageAccess />,
    document.body.appendChild(document.createElement("div"))
  );
});
  • requesting storage access
const requestStorageAccess = () => {
  document
    .requestStorageAccess()
    .then(() => redirectToSetCookies())
    .catch(() => requestFullWindowLaunch());
};

const buttonText = "Continue to LTI Tool";
const promptText =
"Safari requires your interaction with this tool inside Canvas to keep you logged in.\n" +
"A dialog may appear asking you to allow this tool to use cookies while browsing Canvas.\n" +
"For the best experience, click Allow.";

return (
<InteractionPrompt
  action={requestStorageAccess}
  buttonText={buttonText}
  promptText={promptText}
  size="medium"
/>
);
  • requesting a full window launch
const requestFullWindowLaunch = () => {
    window.parent.postMessage(
      {
        messageType: "requestFullWindowLaunch",
        data: FULL_WINDOW_LAUNCH_URL,
      },
      "*"
    );
  };
  • interact with user in a first-party context
const SafariLaunch = () => {
  const redirect = () => {
    window.location.replace(PLATFORM_REDIRECT_URL);
  };
  const buttonText = "Continue to LTI Tool";
  const promptText =
    "Safari requires your interaction with this tool outside of Canvas before continuing.";

  return (
    <InteractionPrompt
      action={redirect}
      buttonText={buttonText}
      promptText={promptText}
    />
  );
};

document.addEventListener("DOMContentLoaded", () => {
  ReactDOM.render(
    <SafariLaunch />,
    document.body.appendChild(document.createElement("div"))
  );
});
  • handle different types of launches, including full window, Safari, and non-Safari
# Safari launch: Full-window launch, solely for first-party user interaction.
# Redirect to Canvas for inline relaunch.
if safari_redirect_required?
  @platform_redirect_url = params[:platform_redirect_url]
  return render('safari/full_window_launch')
end

if browser.safari?
  # Safari launch: request Storage Access, then redirect to
  # :relaunch_after_storage_access_request with pertinent cookie info
  # If Storage Access request fails, request a full window launch instead.
  @id_token = id_token
  @state = state
  return render('safari/request_storage_access')
end

# Non-Safari launch: set cookies and render app launch

Resources

https://webkit.org/blog/7675/intelligent-tracking-prevention/
https://webkit.org/blog/8124/introducing-storage-access-api/
https://webkit.org/blog/10218/full-third-party-cookie-blocking-and-more/

After the fun yesterday of using the dashboard cards to make the entry of a course ID easier (Using the dashboard information via the API in programs), I decided to do a similar thing for user IDs. Rather than having to enter a Canvas user_id for each program, why not be more flexible with a "person_id"? The result is a program that can take any of several forms of user identification and get a Canvas user_id for that user. It also shows how to use some of the SIS IDs described at Object IDs, SIS IDs, and special IDs - Canvas LMS REST API Documentation.

    # check for numeric string, in which case this a Canvas user_id
    if person_id.isdigit():
        info=user_info(person_id)
    elif person_id.count('-') == 4:     # a sis_integration_id
        info=user_info('sis_integration_id:'+person_id)
        integration_id=person_id        # since we have the ID, save it for later
    elif person_id.find('@') > 1:       # if an e-mail address/login ID
        info=user_info('sis_login_id:'+person_id)
    else:
        # assume it is a local university ID
        info=user_info('sis_user_id:'+person_id)

    # extract Canvas user-id from the user's info:
    user_id=info['id']
    print("sortable name={}".format(info['sortable_name']))
    print("Canvas user_id={}".format(user_id))

    # try to get the user's integration_id via their profile
    user_profile=user_profile_info(user_id)
    integration_id=user_profile.get('integration_id', None)
    login_id=user_profile.get('login_id', None)
    if login_id:
        print("login_id={}".format(login_id))

The routines user_info() and user_profile_info() just call the GET /api/v1/users/:id and GET /api/v1/users/:id/profile APIs respectively.
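
For reference, a minimal sketch of what those two helper routines might look like (the base URL, token handling, and variable names here are illustrative assumptions, not the actual code):

import requests

baseUrl = 'https://canvas.example.com/api/v1'          # assumption: your Canvas host
header = {'Authorization': 'Bearer ' + 'YOUR_TOKEN'}   # assumption: your API token

def user_info(user_id):
    # GET /api/v1/users/:id -- :id may be a numeric Canvas id or a sis_*:value form
    r = requests.get("{}/users/{}".format(baseUrl, user_id), headers=header)
    r.raise_for_status()
    return r.json()

def user_profile_info(user_id):
    # GET /api/v1/users/:id/profile
    r = requests.get("{}/users/{}/profile".format(baseUrl, user_id), headers=header)
    r.raise_for_status()
    return r.json()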

 

This has some relation to the question I asked in Searching for users and my several iterations of addressing it.

 

Also, as noted in my reply to What is the use case for "integration_ids"?, in order to get the user's integration_id (i.e., the sis_integration_id) I had to use the GET /api/v1/courses/:course_id/enrollments API. I'm not completely sure why all of these different IDs for a user are not returned in the User structure returned by GET /api/v1/users/:id.

 

In most of my programs that use the Canvas API, I take in the course_id from the command line in numeric form (i.e., the string: 11). One of my colleagues said that he does not like to remember the course numbers but would rather use the course code or a nickname. So this motivated me to see if one could use the dashboard information for this.

 

The first thing I discovered was that it seems the dashboard API is not documented - or perhaps I just could not find it. So I watched a Canvas session in the browser and found the API is:

GET /api/v1/dashboard/dashboard_cards

 

So I made a test program to get all of my cards and make a spreadsheet of them, see my-dashboard.py at GitHub - gqmaguirejr/Canvas-tools: Some tools for use with the Canvas LMS. 
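
For reference, a rough sketch of the dashboard-card fetch (used as list_dashboard_cards() in the code further below); the base URL and token handling here are illustrative assumptions:

import requests

baseUrl = 'https://canvas.example.com/api/v1'          # assumption: your Canvas host
header = {'Authorization': 'Bearer ' + 'YOUR_TOKEN'}   # assumption: your API token

def list_dashboard_cards():
    # GET /api/v1/dashboard/dashboard_cards -- undocumented, observed in the browser
    r = requests.get("{}/dashboard/dashboard_cards".format(baseUrl), headers=header)
    r.raise_for_status()
    # each card is a dict with assetString, courseCode, shortName, originalName, ...
    return r.json()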

 

After looking at the cards and their information, it was really easy to see how to use this information so that the "course_id" argument can actually be a nickname, short name, or original name (or a prefix or substring of one).

def course_id_from_assetString(card):
    global Verbose_Flag

    course_id=card['assetString']
    if len(course_id) > 7 and course_id.startswith('course_'):
        course_id=course_id.replace('course_', "", 1)
        if Verbose_Flag:
            print("course_id_from_assetString:: course_id={}".format(course_id))
        return course_id
    else:
        print("Error: unexpected assetString for card {}".format(card))
        return None

# check if the course_id is all digits, matches course code, or matches a short_name
def process_course_id_from_commandLine(course_id):
    if not course_id.isdigit():
        cards=list_dashboard_cards()
        for c in cards:
            # look to see if the string is a course_code
            if course_id == c['courseCode']:
                course_id=course_id_from_assetString(c)
                break
            # check for matched against shortName
            if course_id == c['shortName']:
                course_id=course_id_from_assetString(c)
                break
            # look for the string at the start of the shortName
            if c['shortName'].startswith(course_id):
                course_id=course_id_from_assetString(c)
                print("picked the course {} based on the starting match".format(c['shortName']))
                break
            # look for the substring in the shortName
            if c['shortName'].find(course_id) > 0:
                course_id=course_id_from_assetString(c)
                print("picked the course {} based on partial match".format(c['shortName']))
                break

            # check for matched against originalName
            if course_id == c['originalName']:
                course_id=course_id_from_assetString(c)
                break
            # look for the string at the start of the originalName
            if c['originalName'].startswith(course_id):
                course_id=course_id_from_assetString(c)
                print("picked the course {} based on the starting match".format(c['originalName']))
                break
            # look for the substring in the originalName
            if c['originalName'].find(course_id) > 0:
                course_id=course_id_from_assetString(c)
                print("picked the course {} based on partial match".format(c['originalName']))
                break

        print("processing course: {0} with course_id={1}".format(c['originalName'], course_id))
    return course_id

Now, hopefully, there will be a happy user: in the main program, to process the first command-line argument as a course_id you simply say:

 

    course_id=process_course_id_from_commandLine(remainder[0])
    if not course_id:
        print("Unable to recognize a course_id, course code, or short name for a course in {}".format(remainder[0]))
        return

 

Of course, there are probably some gotchas - but it should work better than having to look up the numeric values.

You will require the following variables to be set up accordingly. Values such as $USERID, $COURSEID and $ASSIGNMENTID can be obtained from the API or by looking at link URLs in the Canvas web interface.

 

>> $token = '<TOKEN>'
>> $headers = @{"Authorization"="Bearer "+$token}
>> $userId = 123
>> $asUserId = 456
>> $courseId = 789
>> $assignmentId = 101112
>> $fileName = 'submission.bmp'
>> $filePath = 'c:\submission.bmp'
>> $fileContentType = 'image/bmp' 

 

Step 1 - Initiate the assignment submission file upload process.

Note that the AS_USER_ID parameter is attached here to the URI to enable masquerading (otherwise you cannot upload a file to another user's account).

 

>> $response = Invoke-RestMethod `
   -URI   "https://<HOST_NAME>:443/api/v1/courses/$courseId/assignments/$assignmentId/submissions/$userId/files?as_user_id=$asUserId" `
   -headers $headers `
   -method POST 

 

We obtain an upload URI from the $RESPONSE object.


>> $uploadUri = $response.upload_url

 

This upload URI has a life span of 30 minutes and cannot be used after it expires. The response content contains a list of parameters called UPLOAD_PARAMS which should be included in the POST body along with the file data when the file is subsequently uploaded. For our school, these parameters are FILENAME and CONTENT_TYPE.

 

Step 2 - Construct a hashtable which includes the file to be uploaded, along with the file parameters specified in the response above. This hashtable is passed to the Invoke-RestMethod PowerShell command, which sends the file as part of a form submission.


>> $form = @{

   filename = $fileName
   content_type = $fileContentType
   file = Get-Item -Path $filePath
}


>> $response = Invoke-RestMethod `
   -URI $uploadUri `
   -Method POST `
   -Form $form

>> Write-Host "$($response.size) bytes uploaded."

 

 

 

Step 3 - Associate the uploaded file with an assignment submission. The $RESPONSE object returned by the previous API call conveniently contains the ID of the file which was just uploaded. We create a $BODY hashtable which is then submitted as POST parameters to associate the assignment submission with the uploaded file.

 

Note the brackets "[]" which must be included after the "[file_ids]" parameter.


>> $body = @{

   'submission[submission_type]'='online_upload'
   'submission[file_ids][]'=$response.id
}

>> $response = invoke-restmethod `
   -uri "https://<HOSTNAME>:443/api/v1/courses/$courseId/assignments/$assignmentId/submissions" `
   -headers $headers `
   -method POST `
   -body $body `
   -ContentType "multipart/form-data"

 

If no errors occur (these can be handled with TRY/CATCH) then the submission process has completed successfully. 

The $RESPONSE object returned by the previous call does contain values which might also be tested to determine if the submission has completed successfully (e.g. workflow_state='submitted') but I haven't yet encountered a scenario where a submission would fail without throwing a catchable error. 

I've developed a tool I wanted to share here. I teach multiple sections of a course with up to 72 students per section. I typically merge all sections of my course into a single Canvas site. This works well for most things, but grouping 360 students into 60 teams manually is a nightmare. The GUI for team management is an atrocious mess.

 

I've written a Google Apps Script in JavaScript that uses a Google Spreadsheet as the GUI. With this tool, you can download your Canvas roster, and then you can upload all teams automatically using another sheet.

 

I wrote the tool to work with CATME team formation, specifically.  But, so long as the 'CATME Import' sheet contains the student email and team name, it should work fine for manual team formation.

 

This is my first stab at API coding, and so there are bound to be lots of errors and bugs.  For one, I don't have the OAuth2 worked out, so this version uses a temporary token that you get from your Canvas page.  

 

I've put a version of the code up on GitHub - GitHub - dagray3/canvas_api_scripts: collection of Google Apps Scripts in JavaScript for working with Canvas API 

 

I don't know how to share the companion google sheet with others.  But, I think it's a good, rough beta that might be of use to someone.  Hit me up with a DM if you have questions or comments about the code.  Otherwise, be awesome.

In a Canvas course, you can check the number of missing assignments for a single student relatively quickly. You can also message groups of students missing specific assignments from the analytics page (or the gradebook). What you can't do is get a list of all students in a course and their missing assignments in a CSV for quick analysis.

In my never-ending exploration of the Canvas API, I've got a Python script that creates a missing assignments report for a course, broken down by section.

 

Sidebar...

I have my own specific thoughts about using the "missing" flag to communicate with students about work. The bigger picture is that while we're distance learning, it's helpful to be able to get a birds-eye view of the entire course in terms of assignment submission. We also have enlisted building principals to help check in on progress and having this report available is helpful for their lookup purposes.

 

The Script

from canvasapi import Canvas # pip install canvasapi
import csv
import concurrent.futures
from functools import partial


KEY = '' # Your Canvas API key
URL = '' # Your Canvas API URL
COURSE = '' # Your course ID

canvas = Canvas(URL, KEY)
course = canvas.get_course(COURSE)
assignments = len(list(course.get_assignments()))
writer = csv.writer(open('report.csv', 'w'))

def main():
    sections = course.get_sections()

    writer.writerow(['Name', 'Building', 'Last Activity', 'Complete', 'Missing'])

    for section in sections:
        enrollments = section.get_enrollments(state="active", type="StudentEnrollment")
       
        # Play with the number of workers.
        with concurrent.futures.ThreadPoolExecutor(max_workers=3) as executor:
           
            data = []
            job = partial(process_user, section=section)

            results = [executor.submit(job, enrollment) for enrollment in enrollments]
       
            for f in concurrent.futures.as_completed(results):
                data.append(f.result())
                print(f'Processed {len(data)} in {len(list(enrollments))} at {section}')
               
        writer.writerows(data)

def process_user(enrollment, section):
    missing = get_user_missing(section, enrollment.user['id'])
    return [
        enrollment.user['sortable_name'],
        section.name,
        enrollment.last_activity_at,
        len(missing), ', '.join(missing)
    ]

def get_user_missing(section, user_id):
    submissions = section.get_multiple_submissions(student_ids=[user_id],
                                                   include=["assignment", "submission_history"],
                                                   workflow_state="unsubmitted")

    missing_list = [item.assignment['name'] for item in submissions \
        if item.workflow_state == "unsubmitted" and item.excused is not True]

    return missing_list


if __name__ == "__main__":
    main()

 

How does it work?

The script uses UCF's canvasapi library to handle all of the endpoints. Make sure to pip install it before you try to run the script. The Canvas object makes it easy to pass course and section references around for processing. Because each student has to be looked up individually, the script uses multiple threads to speed things up. There isn't much compute, just API calls and data wrangling, so multithreading worked better than multiprocessing.

 

For each section, the script requests each student's submissions, specifying workflow_state="unsubmitted" so the filtering happens on the Canvas servers. From this filtered list, it creates a final list by checking the submission history and any excused flags. The list is then returned to the main worker and the section is written as a whole to keep the process thread-safe.

 

When the script is finished, you'll have a CSV report on your filesystem (in the same directory as the script itself) that you can use.

 

Improvements

Currently, missing assignments are joined as a single string in the final cell, so those could be broken out into individual columns. I found that the resulting sheet is nicer when the number of columns is consistent, but there could be some additional processing added to sort assignments by name to keep order similar.

 

Canvas is also implementing GraphQL endpoints so you can request specific bits of data. The REST endpoints are helpful, but you get a lot of data back. Cutting down on the number of bytes returned would also help it run faster.
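
As a rough illustration of what a GraphQL request looks like (the query here is deliberately minimal, and the field names should be checked against your instance's GraphiQL explorer; URL and KEY have the same meaning as in the script above):

import requests

URL = ''  # Your Canvas URL, e.g. https://school.instructure.com
KEY = ''  # Your Canvas API key

# Minimal example query -- a real report query would select only the
# submission fields it needs instead of pulling whole REST objects.
query = '{ course(id: "1234") { name } }'

resp = requests.post(f'{URL}/api/graphql',
                     headers={'Authorization': f'Bearer {KEY}'},
                     json={'query': query})
print(resp.json())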

While schools are closed, we've moved much of our long term staff development material into Canvas. We have one long-running course with all staff split into site-based sections that has worked as a model for others. We needed a way to essentially duplicate the template course enrollments into new training courses.

 

Ignorance is bliss (sometimes) and I didn't know of a good way to make this happen. I looked at some of the provisioning reports, but I couldn't select a single course to run a report on. So, I reached for Python and the UCF Open canvasapi library to make it happen.

 

At the end of this process, I ended up with a brand new course, populated with teachers enrolled in their specific sections. I was also able to disable the new registration email and set their course status to active by default.

 

from config import PROD_KEY, PROD_URL
from canvasapi import Canvas # pip install canvasapi

# Define your course IDs. Be careful!
template_course_id = ''
new_course_id = ''

canvas = Canvas(PROD_URL, PROD_KEY)

template_course = canvas.get_course(template_course_id)
new_course = canvas.get_course(new_course_id)

# Open the template course section by section
template_sections = template_course.get_sections()

# Get any sections that may already exist in the new course
new_sections = [section.name for section in new_course.get_sections()]

# This whole loop could be improved a little.
for section in template_sections:
    # Get all the section enrollments
    enrollments = section.get_enrollments()

    # In a brand new course the section shouldn't exist yet
    if section.name not in new_sections:
        print(f'Creating section {section.name}')
        new_sections.append(section.name)
        course_section = {
            "name": section.name,
        }
        new_section = new_course.create_course_section(course_section=course_section)
       
        count = 0 # start counting enrollments for quick quality checks
       
        for enrollment in enrollments:
            student = enrollment.user['id']
            print(f'Enrolling {enrollment.user["name"]}')
            count += 1
            args = {
                "course_section_id": new_section.id,
                "notify": False,
                "enrollment_state": "active"
            }
            try:
                new_course.enroll_user(student, "StudentEnrollment", enrollment=args)
            except Exception as e:
                print(e)
        print(f'Enrolled {count} users in {new_section.name}')

It's definitely brute force, but it saved us from having to copy and paste nearly 1,300 users into the new course by hand from a spreadsheet.

 

Why force enroll at all?

I think this highlights one of the barriers for really taking Canvas to the next level for staff support. There is no good way to enroll non-student users in courses for required development. In our case, it's to fulfill a required training for staff and using Canvas makes sense as a lot is done through application and reflection.

 

The public course index in Canvas could be used, but without a great way to expose the course to instructional staff only (I know we could use some JavaScript and edit the template, but that's just another thing to manage) it could lead to students joining courses either by accident or maliciously.

 

We've also toyed around with making a custom self-signup process on an internal website where staff are forwarded directly to the enroll page, but it's another system to manage and another site for teachers to use. The most hands-off approach for all involved is to do something like this in the background as needed to get people where they need to be effectively and efficiently.


ChatBot

Posted by Gideon Williams, May 8, 2020

So I thought I would try to make a ChatBot for Canvas to add to our staff EdTech Help Canvas course.

 

I had come across a number of posts and ideas mentioning this a while back - this one in particular from Sonya Corcoran - Microsoft's QnA Maker = Canvas FAQ ai - and also AI chatbot which answers basic student questions.

 

I spent a couple of hours trying to get it set up. Googled ChatBot. Got some advice about Azure and QnA Maker. Set up a free portal. Followed a few online help guides. Actually, it was not as difficult as I first thought.

 

A bit of trial and error, a few mistakes along the way, some struggles with the tech - but I've actually made one.

The Chat bot is embedded on a Canvas page. I used the Redirect tool used to create an entry on the Navigation menu to take you directly to the page.

 

Of course, this is the easy bit. The fun part of the challenge is now to "program" it and get it to be useful...

 

 

Just the start of a post. More to be added soon but please get in touch or ask questions below or share ideas and thoughts.....

 

Today's work (6th May)

Customising the standard "Hello and welcome!" message:

Thanks to - botframework - How to customize the "Hello and welcome" default response message in Microsoft Azure Bot QnA framework - … 

 

Customising the default "No QnA Maker answers were found" message:

Thanks to - QnA Maker | How to customize the "No good match in FAQ" default response message - YouTube 

 

Adding some images:

 

Thanks to - How to Add Images to QnA Maker Answers in Markdown 

 

Learning how to use Markdown to add some formatting to your responses:

Thanks to - Markdown Tutorial - Introduction has a great hands-on tutorial!

 

Today's work (7th May)

There is an interesting option that allows you to import Word/PDF files to create Q&A responses. The format they suggest needs to be quite a formal design, with the use of headings for certain features.

I tried this with a guide I had written for Learning Apps - the results were NOT GREAT! I had (secretly) hoped that I would magically create amazingly engaging FAQs with pictures and formatting - nope. None of the pictures were added, and as such the step-by-step guide makes little sense.

 

To be fair, a little way down the Microsoft Help guide - Import document format guidelines - QnA Maker - Azure Cognitive Services | Microsoft Docs - it does suggest the sort of document that would work best (basically a Word-based FAQ doc).

I have not tried this with the hyperlinks in place, but if it handles them then at least this is a step in the right direction.

 

Instead, I have been making use of Markdown to copy in links to Canvas pages in our EdTech platform.

 

My help guides are made in Word, and I would ordinarily use the Office 365 integration to link to them. Instead, I saved the file as a PDF and put it into the Canvas course.

 

What I am learning very quickly is how best to create a ChatBot flowchart that allows different approaches for users. Wonderfully enough, it has drawn me back to this superb blog post from Bobby Pedersen - Horse Before the Cart. Purpose first, Canvas second. - and the wonderful comments from Kelley L. Meeusen.

 

It is easier to create a chatbot that responds to a request by providing a link to a help guide and/or some examples. Of course, the real challenge would be developing a framework based not on how you can get help but on:

What would you like to be able to do?

 

Oh, before I forget, CanvasBot now has a face to go with the name.

This post was originally published on my own blog.

 

In moving online, we've tried to streamline all of our communication through Canvas. The goal is to cut down on disconnected email threads and encourage students to use submission comments to keep questions and feedback in context.

 

The Problem

Many students had already turned off email notifications for most communications in Canvas, preferring not to receive any notices, which makes it easier to ignore teacher prompting and revision requests. Notifications are a user setting, and the Canvas admin panel doesn't provide a way to define a default set of notification levels for users. However, with the API we were able to write a Python program that, masquerading as each user via the as_user_id query param with an admin token, sets their notification preferences.

 

API Endpoints

  • GET user communication channel IDs: /api/v1/users/:user_id/communication_channels
  • PUT channel preferences for user: /api/v1/users/self/communication_channels/{channel_id}/notification_preferences/{msg_type}

 

Params

  • Int user_id
  • Int channel_id
  • String frequency

 

Get User IDs

There is no easy way to programmatically get user IDs at the account or subaccount level without looping over each course and pulling enrollments. Instead, we opted to pull a CSV of all enrollments using the Provisioning report in the Admin panel. We configured separate files using the current term as the filter. This CSV included teacher, student, and observer roles; the script limits the notification updates to student enrollments.

 

Script Details

The full program is available in a GitHub gist. Here is an annotated look at the core functions.

 

main handles the overall process in a multi-threaded context. We explicitly define a number of workers in the thread pool because the script would hang without a defined number. Five seemed to work consistently and ran 1500 records (a single subaccount) in about 7 minutes.

 

The CSV includes all enrollments for each student ID, so we created a set to isolate a unique list of student account IDs (the set built while reading the CSV at the top of main below).

 

To track progress, we wrapped the set in tqdm. This prints a status bar in the terminal while the process is running which shows the number of processed records out of the total length. This is not part of the standard library, so it needs to be installed from PyPI before you can import it.

 

def main():
    """
    Update Canvas user notification preferences as an admin.
    """

    unique = set()
    data = []
    with open('your.csv', 'r') as inp:
        for row in csv.reader(inp):
            if re.search("student", row[4]):
                unique.add(int(row[2]))

    with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
        with tqdm(total=len(unique)) as progress:
            futures = []
            for student in unique:
                future = executor.submit(process_student_id, student)
                future.add_done_callback(lambda p: progress.update())
                futures.append(future)
           
            results = [future.result() for future in futures]

 

process_student_id is called by the context manager for each student ID in the set. Canvas breaks communication methods into "channels": email, push, Twitter, etc. Each channel has a unique ID for each user, so we needed to call each user's communication channels and then pass the ID of the email channel to a setter function.

def process_student_id(student):
    # Get the ID of the student's email communication channel
    channel_id = get_channel_id(student)

    try:
        # Update the channel prefs and return
        update = update_prefs(student, channel_id)
        return update
    except Exception as e:
        print(e)

 

GET communication_channels

def get_channel_id(student_id):
    url = f"https://yourURL.instructure.com/api/v1/users/{student_id}/communication_channels"
    resp = requests.request("GET", url, headers=headers)

    for channel in resp.json():
        # find the ID of the email pref
        if channel['type'] == 'email':
            return channel['id']

 

PUT communication_channels/:channel_id/notification_preferences/:message_type[frequency]

The communication channel can receive several types of communications. We wanted to set the student notifications to "immediately" for new announcements, submission comments, and conversation messages. You can define others, as well as their frequencies, by modifying the types and frequency values at the top of update_prefs.

 

The communication types are not well documented, so  we used our own channel preferences to find the notification strings: GET /users/self/communication_channels/:channel_id/notification_preferences.
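
For example, a quick way to dump the available notification categories and their frequencies for one of your own channels (same headers pattern as the rest of the script; the channel ID placeholder and response handling are assumptions):

import requests

headers = {"Authorization": "Bearer YOUR_TOKEN"}  # assumption: your API token

# List your own email channel's preferences to discover category strings
# such as "new_announcement"; inspect resp.json() if the shape differs.
url = ("https://yourURL.instructure.com/api/v1/users/self/"
       "communication_channels/CHANNEL_ID/notification_preferences")
resp = requests.request("GET", url, headers=headers)

for pref in resp.json().get("notification_preferences", []):
    print(pref.get("notification"), pref.get("frequency"))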

 

The crux of this step is to make the request using the Masquerading query param available to the calling user. Make sure the account which generated the API key can masquerade or else the script will return an unauthorized error. 

def update_prefs(student_id, channel_id):
    # loop through different announcement types
    types = ["new_announcement", "submission_comment", "conversation_message"]
    frequency = "immediately"  # 'immediately', 'daily', 'weekly', 'never'
    responses = []

    for msg_type in types:
        url = f"https://elkhart.test.instructure.com/api/v1/users/self/communication_channels/{channel_id}/notification_preferences/{msg_type}?as_user_id={student_id}&notification_preferences[frequency]={frequency}"
        resp = requests.request("PUT", url, headers=headers)

        responses.append(resp)
   
    return responses

 

Final Thoughts

Updating a user's personal preferences isn't something I was thrilled about doing, but given our current circumstances, it was preferable to the alternative of continuing to struggle to help students move forward in their coursework. Further improvements would be to call each CSV in the file system incrementally, cutting down on the time someone has to log in and run the script. Hopefully, this only needs to be done once and does not become a recurring task. Second, there is an endpoint in the API to update multiple communication preferences at once, but it isn't well documented and I wasn't able to get it working reliably. For just one channel and three specific types of messages, the performance improvements probably would have been negligible (at least that's what I'm telling myself).

Hello   

 

I have started to design ready-made Canvas design templates for courses. I have started this project as open source under the MIT license (which means it is free), and anyone can use it. I would love to hear your feedback/suggestions.

 

The cool thing about this project is zero dependency - no need to include any CSS or JS files in your Canvas instance.

 

My Github Project: Click Here - CanvasLMSDesigns

Don't forget to check the demo  

Demo

Features

  • Zero dependency - no need to include any CSS or JS files in your Canvas instance
  • Compatible with Canvas LMS editor

How to use

  • Go to this file - Design-1/index.html - Click here
  • Copy the HTML code from index.html
  • Paste into the Canvas LMS editor

 

 

This is my first design


Creating an Index

Posted by Gerald Q. Maguire, Apr 14, 2020

To follow up on my earlier question in Generating an index and permitted attributes for <span>, this blog post contains some more information about generating an index from the pages in a Canvas course. A full description, script, and source code can be found under "Making an index" at GitHub - gqmaguirejr/Canvas-tools: Some tools for use with the Canvas LMS.

 

Basically, the process is based on creating, in a local directory, a copy of all of the HTML pages in a Canvas course along with some metadata about the module items in the course. Once you have the files, you can find keywords and phrases in the HTML and then construct the index - or, in my case, a number of different indexes. I have split the process of finding keywords and phrases into two parts: the first works on the HTML files to find the strings in the various tags and stores them in a JSON-formatted file, and the second is a program that computes the indexes. In this second program I started by splitting the text into words with a simple regular expression and then switched to using the Python NLTK package - specifically, the functions nltk.sent_tokenize() and nltk.word_tokenize().
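
As a rough sketch of that second step (the file name and the structure of the extracted-text JSON here are assumptions for illustration, not the actual format used by the scripts in the repository):

import json
import nltk   # pip install nltk; run nltk.download('punkt') once

# Assumed structure: a dict mapping page URLs to the text extracted from their tags
with open('extracted_text.json') as f:
    pages = json.load(f)

index = {}
for url, text in pages.items():
    for sentence in nltk.sent_tokenize(text):
        for word in nltk.word_tokenize(sentence):
            if word.isalpha():                       # skip punctuation tokens
                index.setdefault(word.lower(), set()).add(url)

# index now maps each candidate keyword to the set of pages it appears on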

 

The resulting page (computed from ~850 HTML files) can be seen at Test page 3: Chip sandbox 

 

With regard to <span>, I found it useful to use them in three ways:

1. To keep a set of words together as a logical "word":

<span>Adam Smith</span> <span>Autonomous system number</span>

2. To mark text that I did not want to index:

<span class="dont-index">20% error rate</span>

3. To mark text as a reference (that I do not want to index):

<span class="inline-ref">(see Smith, Figure 10, on page 99.)</span>

Overall, the process of generating an index was useful - I found misspellings, inconsistent use of various terms and capitalization, and random characters that seemed to have been typos or poor alternative img descriptions. It was also a nice forcing function to rework some of the content.

 

However, it remains a work in progress. I know that there are a number of weaknesses, such as not being careful to language-tag entries in the final index, and there is a need to remove some additional words that probably should not be in the index. Also, this is not a general-purpose natural language processing program - it could make better use of the NLTK package, and it is very English-language centric (it assumes the default language of the course is English, it does not pass the actual language information to the tokenization functions, and it only contains stop words in English).

 

 

This blog describes how to move user enrollments from one role to another using a Python class, SQL data, and a mapping file.

 

So here is the situation we are presently facing at Everett Public Schools. Along with our base roles of Student, Teacher, Designer, etc., we also have custom roles that have been derived from those base roles. These custom roles are a bit more refined and help keep users and their permissions in check. The problem with this idea is that not everyone follows the rules when assigning a role to a user as that user is enrolled into a course. This quickly becomes an issue when trying to search and sort users based upon their permissions.

 

Case in point: We have teachers that are enrolled as students in staff courses or portals that are located at their respective school or sub-account.  So are they truly a student in the classic sense?  No.  When you do a blind search for students, you get back a bunch of teachers and maybe a few other users that somebody down the line added to a course as a student.  Now that the user data set has gotten out of hand, how do you move those enrollments over to the new custom role that you just created?  In addition to that, how do you keep it all in sync?

 

The solution comes in a few simple steps which you can follow below. First, you need to decide which set of users needs to be moved from one role to another. In our case, we wanted non-students (i.e. district staff) that were currently assigned the base role of StudentEnrollment (aka Student). These district IDs are the same as their login ID and SIS ID too, so it keeps things straight. Since we run multiple nightly integrations, we simply created a new section in our SQL code to pull only the district staff IDs. Like this:

/*
STAFF USERS
*/
IF @type = 'STAFF_USERS'
BEGIN
SELECT login_id
FROM eps_canvas.dbo.users
WHERE user_type = 'F';
END

Just a bit of backstory to explain the logic. In Everett we use several nightly imports into Canvas to roster courses, control users, etc. More on that in another blog, but suffice it to say it works very well. We use a 'users' table in a smaller database to control who gets put into Canvas. The user_type of 'F' is for 'faculty'. So when this script runs, it uses the 'staff_users' input parameter to control what data set the script will receive. This logic comes from the script configuration .ini file:

 

[Default]

#API SIS upload URL for the site
#Root account should always be 1
rootURL: https://everettsd.beta.instructure.com/api/v1/

#The URL string data that allows acting as another user
#The 'replace' placeholder gets replaced with the correct term in the script
masqueradeData: {"as_user_id": "replace"}

#The list of parameters to pull from the DB
#Use this list to effect the role mapping below
#Comma delimited, any order
dbParams: staff_users

#Text of the SQL Server stored procedure SQL
#For getting of district ids
dbSQL: exec eps_internal.dbo.pyCurGetCanvasCustomExtracts ?

#The endpoint to get enrollments for a user
enrollmentsEndpoint: users/self/enrollments

#The endpoint to enroll the user in the course
coursesEndpoint: courses/{}/enrollments

#The endpoint to get all of the current roles
rolesEndpoint: accounts/1/roles

#The mapping from one role to another for each DB parameter
#The key for each map is keyed off of the dbParams list
#The JSON object for each dbParam is a key of the permission type to find, the value is the role to assign
#All values are case sensitive and must match exactly to what is in Canvas
roleMapping: {"staff_users": {"StudentEnrollment": "Adult Learner"}}

When the script is executed, it looks for an associated configuration file and reads in the [Default] section data. It also reads a master configuration file so it can set some global variables, but that is outside the scope of this post. Each parameter is then assigned to an internal variable that the script uses to do its thing. Jumping down to the bottom line in the file, the roleMapping dictionary is keyed to the dbParams value. This is how the data set knows which users to process, what role to look for (in this case 'StudentEnrollment') and what role to use when enrolling the user into the current course ('Adult Learner'). If we wanted to process more users with this script workflow, we would add a value to the dbParams list and add the same value to the roleMapping dictionary along with the roles to use.

 

At some point, we needed to create our 'Adult Learner' role.  We wanted a role that was student based but that could be used for staff members that are fulfilling some student role in a course somewhere.  We wanted the student role to truly reflect actual students in the district.

 

So now we are ready to roll.  Consider this Python class:

 

from requests import Session
from classEpsDB import EpsDB
from classEpsException import EpsException
from classEpsConfiguration import EpsConfiguration
from json import loads
from urllib import parse


class EpsITSyncCanvasEnrollments(object):
    """
    Syncs the Canvas enrollments between what was assigned to a user and what should be the correct assignment.
    We do this to keep users from getting the incorrect enrollment and streamlining the search process.
    @package: epsIT
    @license: http://opensource.org/licenses/GPL-3.0
    @copyright: 2020, Everett Public Schools
    @author: DPassey
    @version: 1.0, 02.24.2020
    """

    def __init__(self, user_id_type='sis_user_id'):
        """
        Class initializer.
        Parses the config file_name, assigning values as needed.
        @raise exception: EpsException
        """
        try:
            cfg = EpsConfiguration(f"{self.__class__.__name__}.ini")
            self.rc = 0
            if not cfg.db_dsn: raise Exception(f"{self.__class__.__name__}.__init__. DSN data source is missing.")
            for k in cfg.locals:
                k = k.upper().strip()
                v = cfg.locals[k].strip()
                if k == 'DBSQL': db_sql = v
                if k == 'DBPARAMS': param_list = v.split(',')
                if k == 'ROOTURL': root_url = v
                if k == 'MASQUERADEDATA': masquerade = v
                if k == 'ENROLLMENTSENDPOINT': enroll_endpoint = v
                if k == 'COURSESENDPOINT': course_endpoint = v
                if k == 'ROLEMAPPING': roles_map = loads(v)
                if k == 'ROLESENDPOINT': roles_endpoint = v

            # set the session header
            self.header = {'Authorization': f'Bearer {cfg.canvas_token}'}

            # must be one of these
            if user_id_type not in ('sis_user_id', 'sis_login_id'): raise Exception(f'{self.__class__.__name__}.__init__. Invalid parameter: {user_id_type}.')

            # create a session
            with Session() as self.session:
                # get the type of user from the parameter list
                for _ in param_list:
                    # get all of the active roles
                    url = f"{root_url}{roles_endpoint}"
                    # for each mapped role for this parameter, get the role's id
                    roles_dict = self.get_account_roles(url, roles_map[_])
                    # get the data to process for each parameter
                    data = self.get_data(cfg.db_dsn, db_sql, _)
                    # proceed if we get user data
                    if data:
                        # for each user in the data, find the applicable enrollments to move
                        for user in data:
                            # set up masquerading
                            self.data_dict = loads(masquerade.replace('replace', "{}:{}".format(user_id_type, user[0])))
                            # get all of the user's enrollments to see if we need to change enrollments
                            user_dict = self.get_enrollments(f"{root_url}{enroll_endpoint}", roles_map[_])
                            # now process the users by their Canvas id
                            for user_id in user_dict:
                                # process each course and re-enroll the user
                                # we need to keep the indexing linked between course and enrollment
                                for c, course in enumerate(user_dict[user_id]['courses']):
                                    # get the role id of the new role
                                    # need this to move enrollments
                                    role_id = roles_dict[user_dict[user_id]['roles'][c]]
                                    # get the current enrollment id
                                    enroll_id = user_dict[user_id]['enrollments'][c]
                                    endpoint = course_endpoint.format(course)
                                    # now set the new enrollments
                                    self.set_enrollment(f"{root_url}{endpoint}", user_id, role_id, enroll_id)
        except:
            EpsException(__file__)

    def get_data(self, dsn, sql, param):
        """
        Executes the stored procedure and gets the applicable data set.
        @param dsn: String
        @param sql: String
        @param param: String
        @return: List
        @raise exception: EpsException
        """
        try:
            db = EpsDB(dsn)
            if not db: raise Exception(f"{self.__class__.__name__}.get_data. Could not connect to database.")
            rs = db.get(sql, param)
            if not rs: raise Exception(f"{self.__class__.__name__}.get_data. No data set returned.")
            return rs
        except:
            EpsException(__file__)

    def get_account_roles(self, url, role_dict):
        """
        Gets the active roles and puts them in a roles dictionary.
        @param url: String
        @param role_dict: Dictionary
        @return Dictionary
        @raise exception: EpsException
        """
        try:
            role_id_dict = {}
            # get all active roles
            data_dict = {'state[]': 'active', 'per_page': 100}
            resp = self.session.get(url, data=data_dict, headers=self.header)
            if resp.status_code == 200:
                # check the headers "link" attribute for the last relational link
                for link in resp.headers['Link'].split(','):
                    if 'rel=last' in link.replace('"','').replace("'",'').lower():
                        # grab the total pages count by parsing out the url parts and convert to int
                        page_total = int(parse.parse_qs(parse.urlparse(link.split(';')[0])[4])['page'][0])
                        # we need to get all results since we are being paginated
                        # these sections perform the same logic, just easier to write it this way
                        if page_total > 1:
                            p = 1
                            while p <= page_total:
                                data_dict.update({'page': p})
                                resp = self.session.get(url, data=data_dict, headers=self.header)
                                json = loads(resp.text)
                                for _ in json:
                                    if _['role'] in role_dict.values(): role_id_dict[_['role']] = _['id']
                                p += 1
                        else:
                            json = loads(resp.text)
                            for _ in json:
                                if _['role'] in role_dict.values(): role_id_dict[_['role']] = _['id']
            else: raise Exception(f"{self.__class__.__name__}.get_account_roles. Response {resp.text} returned.")
            return role_id_dict
        except:
            EpsException(__file__)

    def get_enrollments(self, url, map_dict):
        """
        Gets the roles for the user and place in a user dictionary.
        @param url: String
        @param map_dict: Dictionary
        @return Dictionary
        @raise exception: EpsException
        """
        try:
            user_list = []
            enrollments_list = []
            roles_list = []
            user_dict = {}
            # make a copy of the class data dictionary so we can update it
            data_dict = self.data_dict.copy()
            # we should never exceed the per_page value
            # i mean really....over 100 enrollments?
            # current_and_future is a special state for all courses, published and unpublished
            data_dict.update({'state[]': 'current_and_future', 'per_page': 100})
            resp = self.session.get(url, data=data_dict, headers=self.header)
            if resp.status_code == 200:
                json = loads(resp.text)
                for _ in json:
                    # check if user is enrolled in the course per the map_dict keys
                    if _['role'] in map_dict:
                        user_id, course_id, enroll_id = [_['user_id'], _['course_id'], _['id']]
                        user_list.append(course_id)
                        enrollments_list.append(enroll_id)
                        roles_list.append(map_dict[_['role']])
                # build the user enrollment dictionary for those mapped roles
                if user_list: user_dict = {user_id: {"courses": user_list, "enrollments": enrollments_list, "roles": roles_list}}
            else: raise Exception(f"{self.__class__.__name__}.get_enrollments. Response {resp.text} returned.")
            return user_dict
        except:
            EpsException(__file__)

    def set_enrollment(self, url, user_id, role_id, enroll_id):
        """
        Sets the user enrollment for the course by deleting the original enrollment, making a new one.
        @param url: String
        @param user_id: Int
        @param role_id: Int
        @param enroll_id: Int
        @raise exception: EpsException
        """
        try:
            # now we enroll the user in the proper role
            # we keep the enrollment type blank so the role id will override the base enrollment
            data = {"enrollment[user_id]": user_id, "enrollment[type]": '', "enrollment[role_id]": role_id, "enrollment[enrollment_state]": "active"}
            resp = self.session.post(url, data=data, headers=self.header)
            if resp.status_code == 200:
                # do not change the url as we want to delete the old enrollment now
                resp = self.session.delete(f"{url}/{enroll_id}", data={"task": "delete"}, headers=self.header)
                if resp.status_code == 200: self.rc += 1
                else: raise Exception(f"{self.__class__.__name__}.set_enrollment. Response {resp.text} returned.")
            else: raise Exception(f"{self.__class__.__name__}.set_enrollment. Response {resp.text} returned.")
        except:
            EpsException(__file__)


# end of class
x = EpsITSyncCanvasEnrollments()
print(x.rc)

This is the flow:

  1. Read in the configuration .ini files, one that is global (the EpsConfiguration class) and one that is named the same as this class
  2. Assign the configuration values to class values
  3. Query the database for the data set of user login ids
  4. Get a data set of all of the roles that currently exists in our Canvas instance
  5. For each user, act as that user and get all of the current and future enrollments
  6. Using the mapping dictionary, find each enrollment that we need to change and get the role id value from the list of roles that were grabbed earlier
  7. For each applicable enrollment, enroll the user in the new role for the course, set it to active, and then delete the old enrollment

 

And there you go.  You have moved all of your applicable enrollments over to the new role without having to do it manually.  Setting this script up as a regular job, depending on your needs of course, will ensure that your Canvas user role assignments don't get out of control.

I find the current system of email notifications about newly submitted assignments to be almost worthless, as I am in a number of courses with large numbers of students, and most of these notices are irrelevant from my point of view as a teacher. In these courses, sections have been created to make it easy for a teacher to view the subset of students that is actually relevant to that teacher. However, since I have a large number of such courses (i.e., more than a dozen) and students submit material at their own pace throughout these courses, it is difficult to find the wheat among the chaff of notices about submissions for each of these courses.

 

This motivated the design of a program to get information about just the assignment submissions that I am interested in. Of course one can easily get a list of all the courses that a user is in, but how can you know what sections within these courses a user is interested in?  The answer is to ask the user to provide this information!

 

The result is two programs:

  1. create_JSON_file_of_sections_in_your_courses.py
  2. list_ungraded_submissions_in_your_courses_JSON.py

The first program creates a JSON formatted file with a course_info dictionary of the form:

{"courses_to_ignore": dict_of_courses_to_ignore,

"courses_without_specific_sections": dict_of_courses_without_specific_sections,

"courses_with_sections": dict_of_courses_with_sections

}

 

  • courses_to_ignore are courses that the user wants to ignore
  • courses_without_specific_sections are courses where the user is responsible for all the students in the course
  • courses_with_sections are courses where the user has a specific section - the section's name may be the user's name (in Canvas) or some other unique string (such as "Chip's section"). Because the name of the relevant section can be arbitrary, this file is necessary to know which section belongs to a given user.

 

The second program reads the information from the JSON file, prunes the courses_to_ignore from the list of the user's courses, and then uses the information from courses_without_specific_sections and courses_with_sections to iterate through the remaining courses, looking for ungraded submissions from the relevant students (everyone in the course, or just the user's section). Currently, the program just outputs information about these submissions.

 

Setting up the JSON file is easy: simply run the first program and then move entries from the courses_with_sections dict to one of the other dicts (removing unnecessary or irrelevant sections as you go). You can run the first program in update mode (with the -U flag) to add more courses - it remembers the courses you have set to be ignored and the ones where you have responsibility for all the students.

 

The programs can be found on GitHub at gqmaguirejr/Canvas-tools (some tools for use with the Canvas LMS).

 

Of course, I discovered an assignment that had been submitted that I had not seen, so on to grading it!

For some time I have been running a local Canvas instance for development activities. This has enabled me both to peek under the covers and to give students a VM with a complete Canvas instance and the programs that I have developed.

 

During the summer I noticed that, after updating the code from the GitHub Canvas sources, the dashboard kept flashing and never rendered a static dashboard, and when I went to the assignments page I could not see the list of assignments.

Using the inspector in the browser I could see the query returning the JSON for the assignments in the course; however, nothing appeared on the page.

After some looking at the assignments page, I found that where I expected to see the assignments there was a div whose class included "hide-content-while-scripts-not-loaded". Searching the source code (using find), I found the following:

find . -type f -exec grep hide-content-while-scripts-not-loaded {} \; -print
  @body_classes << 'hide-content-while-scripts-not-loaded'
./app/views/assignments/new_index.html.erb
      @body_classes << 'hide-content-while-scripts-not-loaded'
./app/views/courses/show.html.erb
  @body_classes << 'hide-content-while-scripts-not-loaded right-side-optional'
./app/views/announcements/index.html.erb
  @body_classes << 'hide-content-while-scripts-not-loaded'
./app/views/discussion_topics/index.html.erb
  @body_classes << "full-width no-page-block hide-content-while-scripts-not-loaded"
./app/views/calendars/show.html.erb

So this hiding of content occurs in a number of places, but I could not find the CSS.
After a bit of searching, I found the following at https://code.vt.edu/griffc1/canvas-lms/blob/de9d56b7f0f8b1818d9f161c737c86744e17b756/app/stylesheets/base/_layout.sass:

// This hides stuff till the javascript has done it's stuff
.hide-content-while-scripts-not-loaded
  #content, #right-side-wrapper
    +single-transition(opacity, 0.3s)
    +opacity(1)
.scripts-not-loaded
  #content, #right-side-wrapper
    +opacity(0)

The above means that the content is purposely hidden until the relevant JavaScript has been loaded.

Additionally, using the inspector in the browser, I saw the following when trying to display the assignments page for a course:

assignment_index.js:14 Uncaught (in promise) Error: Cannot find module '@instructure/js-utils'
    at webpackMissingModule (assignment_index.js:14)
    at eval (assignment_index.js:14)
    at Module.sMe2 (assignment_index-c-9c2eac0849.js:1941)
    at __webpack_require__ (main-e-a68344b004.js:64)

Going to the Docker container where the webpack bundle is built, I ran yarn run webpack. In the output I found:

ERROR in ./app/jsx/bundles/dashboard_card.js
Module not found: Error: Can't resolve '@instructure/js-utils' in '/usr/src/app/app/jsx/bundles'
 @ ./app/jsx/bundles/dashboard_card.js 22:0-65 40:33-39 40:40-56
 @ ./node_modules/bundles-generated.js
 @ ./app/jsx/main.js

ERROR in ./app/jsx/bundles/assignment_index.js
Module not found: Error: Can't resolve '@instructure/js-utils' in '/usr/src/app/app/jsx/bundles'
 @ ./app/jsx/bundles/assignment_index.js 29:0-57 91:0-16
 @ ./node_modules/bundles-generated.js
 @ ./app/jsx/main.js

ERROR in ./app/jsx/dashboard/DashboardHeader.js
Module not found: Error: Can't resolve '@instructure/js-utils' in '/usr/src/app/app/jsx/dashboard'
 @ ./app/jsx/dashboard/DashboardHeader.js 37:0-65 283:27-33 283:34-50
 @ ./app/jsx/bundles/dashboard.js
 @ ./node_modules/bundles-generated.js
 @ ./app/jsx/main.js

ERROR in ./app/jsx/discussions/apiClient.js
Module not found: Error: Can't resolve '@instructure/js-utils' in '/usr/src/app/app/jsx/discussions'
 @ ./app/jsx/discussions/apiClient.js 19:0-66 28:9-16 28:17-33
 @ ./app/jsx/discussions/actions.js
 @ ./app/jsx/discussions/components/DiscussionsIndex.js
 @ ./app/jsx/discussions/index.js
 @ ./app/jsx/bundles/discussion_topics_index_v2.js
 @ ./node_modules/bundles-generated.js
 @ ./app/jsx/main.js

The above means that js-utils cannot be resolved, despite the fact that it is a package, as one can see from the output of the command "ls packages":

babel-preset-pretranslated-format-message canvas-planner canvas-rce canvas-supported-browsers jest-moxios-utils js-utils k5uploader old-copy-of-react-14-that-is-just-here-so-if-analytics-is-checked-out-it-doesnt-change-yarn.lock

The solution is similar to the one in https://github.com/instructure/canvas-lms/issues/1318.

In the docker-compose.override.yml file, add the following to the services -> jobs -> volumes key:
- js-utils:/usr/src/app/packages/js-utils

and then, under the volumes key farther down the file, add:
js-utils: {}
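
Putting those two fragments in context, a minimal docker-compose.override.yml might look something like the sketch below. This is only a sketch of where the lines go; any other services and volumes already in your override file stay as they are, and the two js-utils lines are the actual addition:

# docker-compose.override.yml (sketch; existing services and volumes omitted)
services:
  jobs:
    volumes:
      - js-utils:/usr/src/app/packages/js-utils
volumes:
  js-utils: {}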

This fixes the problems with dashboard and assignments!

I also noticed that another module in packages ('canvas-planner') has problems during yarn run webpack:

ERROR in ./packages/canvas-planner/lib/actions/index.js
Module not found: Error: Can't resolve '@instructure/js-utils' in '/usr/src/app/packages/canvas-planner/lib/actions'
 @ ./packages/canvas-planner/lib/actions/index.js 22:0-66 101:18-25 101:26-42
 @ ./packages/canvas-planner/lib/index.js
 @ ./app/jsx/dashboard/DashboardHeader.js
 @ ./app/jsx/bundles/dashboard.js
 @ ./node_modules/bundles-generated.js
 @ ./app/jsx/main.js

ERROR in ./packages/canvas-planner/lib/actions/loading-actions.js
Module not found: Error: Can't resolve '@instructure/js-utils' in '/usr/src/app/packages/canvas-planner/lib/actions'
 @ ./packages/canvas-planner/lib/actions/loading-actions.js 24:0-66 82:18-25 82:26-42 158:16-23 158:24-40
 @ ./packages/canvas-planner/lib/actions/index.js
 @ ./packages/canvas-planner/lib/index.js
 @ ./app/jsx/dashboard/DashboardHeader.js
 @ ./app/jsx/bundles/dashboard.js
 @ ./node_modules/bundles-generated.js
 @ ./app/jsx/main.js

My hypothesis is that a similar approach can be used to solve this problem. However, the output of yarn run webpack also shows the following (edited to reduce the mass of output):

ERROR in ./packages/canvas-planner/lib/actions/loading-actions.js
Module not found: Error: Can't resolve '@instructure/js-utils' in '/usr/src/app/packages/canvas-planner/lib/actions'
 @ ./packages/canvas-planner/lib/actions/loading-actions.js 24:0-66 82:18-25 82:26-42 158:16-23 158:24-40
 @ ./packages/canvas-planner/lib/actions/index.js
 @ ./packages/canvas-planner/lib/index.js
 @ ./app/jsx/dashboard/DashboardHeader.js
 @ ./app/jsx/bundles/dashboard.js
 @ ./node_modules/bundles-generated.js
 @ ./app/jsx/main.js

ERROR in ./app/coffeescripts/media_comments/js_uploader.js
Module not found: Error: Can't resolve '@instructure/k5uploader' in '/usr/src/app/app/coffeescripts/media_comments'
 @ ./app/coffeescripts/media_comments/js_uploader.js 21:0-49 106:26-36 123:26-36
 @ ./public/javascripts/media_comments.js
 @ ./app/jsx/runOnEveryPageButDontBlockAnythingElse.js
 @ ./app/jsx/main.js

ERROR in ./packages/canvas-rce/lib/bridge/Bridge.js
Module not found: Error: Can't resolve '@instructure/k5uploader' in '/usr/src/app/packages/canvas-rce/lib/bridge'
 @ ./packages/canvas-rce/lib/bridge/Bridge.js 21:0-49 69:38-48 ...

ERROR in ./packages/canvas-rce/lib/rce/ResizeHandle.js
Module not found: Error: Can't resolve 'react-draggable' in '/usr/src/app/packages/canvas-rce/lib/rce'
 @ ./packages/canvas-rce/lib/rce/ResizeHandle.js 22:0-48 65:27-40 ...

  ModuleDependencyWarning: "export 'passthroughProps' was not found in '@instructure/ui-react-utils' ...,
  ModuleDependencyWarning: "export 'passthroughProps' was not found in '@instructure/ui-react-utils' ... ]
98% after emitting SizeLimitsPlugin
[ ModuleDependencyWarning: "export 'addInputModeListener' was not found in '@instructure/ui-dom-utils' ...,
  ModuleDependencyWarning: "export 'passthroughProps' was not found in '@instructure/ui-react-utils' ...,
  ModuleDependencyWarning: "export 'passthroughProps' was not found in '@instructure/ui-react-utils' ...

It makes me curious why all of these missing modules include the path "@instructure". Is there some error in the configuration that leads to these packages not being found (despite the fact that "yarn list" showed that "@instructure/js-utils" was installed)?

 

I should note that I am a novice with respect to Javascript - so some of the problems might be operator error, but the Canvas source code was freshly installed via the quick start update script.

We've been working for a while on leveraging the Canvas API to work with other systems for particular learning use cases. We're developing a middleware app using ASP.NET Core MVC to manage the integrations.

 

We've been using the access tokens that each Canvas user can generate to work with the API. This is fine for development and testing, but when we need to extend usage we want to avoid asking users to create their own tokens. A neater solution is to authenticate directly into Canvas using OAuth and, from this, get a token for the logged-in user that can be used for subsequent API calls. This maintains the context-based security that is a key feature of the access token.

 

Before I get into the steps for getting OAuth to work in ASP.NET Core MVC and the intricacies of connecting to Canvas, I'll give you a link to a GitHub repo that contains a very simple example. This is not production code and is an example only.

 

I also want to acknowledge the series of posts by Garth Egbert on the OAuth workflow in .NET. I wouldn't be writing this now if it wasn't for Garth. I also got a lot of help from this post by Jerrie Pelser that works through an example of using OAuth2 to authenticate an ASP.NET Core App with Github.

 

Getting Started

In this example I'm using a local instance of Canvas running as a Docker container. If you want to follow along then install Docker Desktop. Then download and run lbjay's canvas-docker container. This container is designed for testing LTIs and other integrations locally and comes with default developer keys:

  • developer key: test_developer_key
  • access token: canvas-docker

 

You can also log in to the Canvas instance and add your own developer keys if you want to.

 

The other thing you'll need to get started is an IDE of your choice. I'll be using Visual Studio 2019 Community edition, but you could use Visual Studio Code or another tool that you prefer.

 

Step 1 - Make sure that the test version of Canvas is running

Start Docker Desktop and load the canvas-docker container. Once it has initialised it is available at http://localhost:3000/ 

 

The admin user/pass login is canvas@example.edu / canvas-docker.

 

Step 2 - Create a new ASP.NET MVC Core 2.2 application

Start Visual Studio 2019 and select Create a new project.

 

Visual Studio Start Screen

Select ASP.NET Core Web Application.

Visual Studio Project type screen

Set the Project name.

Visual Studio Project Name

In this case we're using an MVC application so set the type to Web Application (Model-View-Controller). Make sure that ASP.NET Core 2.2 is selected and use No Authentication as we're going to use Canvas.

Visual Studio project sub type

 

Step 3 - Let's write some code

OAuth requires a shared client id and secret that exist in Canvas and can be used by an external app seeking authentication. The canvas-docker container has a developer key already in it, but you can add your own.

 

The default key credentials are:

Client Id: 10000000000001

Client Secret: test_developer_key

 

You can get to the developer keys by logging in to your local instance of Canvas and going to Admin > Site Admin > Developer Keys.

 

Now we need to store these credentials in our web app. For this example we'll put them in the appsettings.json file. You can see the code that we've added in the image below. Please note that in proper development and production instances these credentials should be stored elsewhere. Best practice for doing this is described here: Safe storage of app secrets during development in ASP.NET Core.

 

app settings json

In this case Canvas is the name of the authentication scheme that we are using.
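
For reference, the relevant part of appsettings.json might look something like the sketch below. The section and key names here are my choice; they only need to match whatever you read back in Startup.cs, and (as noted above) real secrets should not be stored in this file:

{
  "Canvas": {
    "ClientId": "10000000000001",
    "ClientSecret": "test_developer_key"
  }
}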

 

Now, the configuration for OAuth2 happens mostly in the Startup.cs file. This class runs when the app is first initialised. Within this class is a public void method called ConfigureServices in which we can add various services to the application through dependency injection. The highlighted zone in the image below shows how to add an authentication service and configure it to use OAuth.

 

Startup config

The basic process is to call services.AddAuthentication and then set a series of options. First we set the DefaultAuthenticateScheme and the DefaultSignInScheme to use cookies. We set the DefaultChallengeScheme to "Canvas", the scheme we configure with the settings from the appsettings.json file.

 

We can chain onto that a call to AddCookie(), and then chain onto that the actual OAuth settings. As you can see, we set "Canvas" as the scheme and then set the options. The ClientId and ClientSecret options are self-explanatory. The CallbackPath option needs to be set to the same value as the Redirect URI in the developer key settings in Canvas. You may need to edit the settings in Canvas so they match. The image below shows where this is located.

 

Callback URI

 

The three endpoints are obviously critical. The AuthorizationEndpoint and the TokenEndpoint are described in the Canvas documentation. The authorization endpoint is a GET request to login/oauth2/auth. As you can see, there are various parameters that can be passed in, but we don't really need any of them in this case.

 

The Token endpoint is a POST request to login/oauth2/token. Again, there are various parameters that can be passed in but we don't really need any here.

 

The UserInformationEndpoint was the hardest endpoint to work out. It is not explicitly mentioned in the documentation. There is a mention in the OAuth overview of setting scope=/auth/userinfo. I couldn't get that to work, but I may have been overlooking something simple. In the end it became apparent that we would need an endpoint that returns some user information in JSON format. There is an API call that does just that: /api/v1/users/self 

 

The AuthorizationEndpoint and the TokenEndpoint are handled automatically by the OAuth service in the web app. The UserInformationEndpoint is called explicitly in the OnCreatingTicket event. But before we get there we need to make sure that we set SaveTokens and map the JSON keys to claims, using the values we'll eventually get back when we call the UserInformationEndpoint. Here we are mapping the user id and name from Canvas.

 

That brings us on to the events. There are several events that can be coded against, including an OnRemoteFailure event. For simplicity's sake we've just used the OnCreatingTicket event which, as its name suggests, occurs when Canvas has created a ticket and sent it back.

 

In this event we create a new HttpRequestMessage to call the UserInformationEndpoint with a GET request. We need to add headers to the request: the first tells the request to expect a JSON response; the second carries the access token that Canvas has sent back to the web app for this user.

 

All that is left to do is send the request and capture the response containing the user information from Canvas, call EnsureSuccessStatusCode to make sure we got a good response back, parse the JSON with the user info, and then call RunClaimActions to map the name and id into the web app's authentication.
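
Pulling the pieces described above together, a ConfigureServices along the lines of the sketch below should work. This is a sketch rather than a copy of the code in the screenshots: the CallbackPath value, the appsettings key names and the localhost:3000 base URL are my assumptions, and it targets ASP.NET Core 2.2 (so the user JSON is parsed with Newtonsoft's JObject). The Configure method is covered in the next step.

// Startup.cs (sketch) - assumes the standard template with an injected IConfiguration.
using System.Net.Http;
using System.Net.Http.Headers;
using System.Security.Claims;
using Microsoft.AspNetCore.Authentication;
using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.AspNetCore.Authentication.OAuth;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Newtonsoft.Json.Linq;

public class Startup
{
    public Startup(IConfiguration configuration) => Configuration = configuration;

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddAuthentication(options =>
        {
            // Sign the user in with a cookie; challenge unauthenticated requests with the "Canvas" OAuth scheme.
            options.DefaultAuthenticateScheme = CookieAuthenticationDefaults.AuthenticationScheme;
            options.DefaultSignInScheme = CookieAuthenticationDefaults.AuthenticationScheme;
            options.DefaultChallengeScheme = "Canvas";
        })
        .AddCookie()
        .AddOAuth("Canvas", options =>
        {
            options.ClientId = Configuration["Canvas:ClientId"];
            options.ClientSecret = Configuration["Canvas:ClientSecret"];

            // Hypothetical path - must match the Redirect URI on the Canvas developer key.
            options.CallbackPath = new PathString("/signin-canvas");

            // Endpoints on the local canvas-docker instance.
            options.AuthorizationEndpoint = "http://localhost:3000/login/oauth2/auth";
            options.TokenEndpoint = "http://localhost:3000/login/oauth2/token";
            options.UserInformationEndpoint = "http://localhost:3000/api/v1/users/self";

            options.SaveTokens = true;

            // Map fields from the user-info JSON onto claims.
            options.ClaimActions.MapJsonKey(ClaimTypes.NameIdentifier, "id");
            options.ClaimActions.MapJsonKey(ClaimTypes.Name, "name");

            options.Events = new OAuthEvents
            {
                OnCreatingTicket = async context =>
                {
                    // Ask Canvas for the current user's details, presenting the access token.
                    var request = new HttpRequestMessage(HttpMethod.Get, context.Options.UserInformationEndpoint);
                    request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
                    request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", context.AccessToken);

                    var response = await context.Backchannel.SendAsync(request,
                        HttpCompletionOption.ResponseHeadersRead, context.HttpContext.RequestAborted);
                    response.EnsureSuccessStatusCode();

                    var user = JObject.Parse(await response.Content.ReadAsStringAsync());
                    context.RunClaimActions(user);
                }
            };
        });

        services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_2);
    }
}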

 

There is one other thing that we need to do in the Startup.cs class. There is a public void Configure method in which we tell the app to use various tools and resources. Here we need to add app.UseAuthentication() to tell the app to use authentication. This call should come before the app.UseMvc() call.

 

Use Authentication
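
Based on the description above, the Configure method might look something like this sketch, trimmed to the parts that matter here (the rest is whatever the ASP.NET Core 2.2 template generated for you):

// Also in Startup.cs - the request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }

    app.UseStaticFiles();

    // Authentication must be registered before MVC handles the request.
    app.UseAuthentication();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}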

So, now the app is set up to use OAuth with Canvas. We just need a situation to invoke it and show the outcome.

 

To do this we will create a LogIn action in a new Controller. So create a new Controller class in the Controllers folder and call it AccountController.cs. In this controller we will add a LogIn Action.

 

Account controller

 

This Action will be called when the browser makes a GET request to the Account/Login path. It returns a Challenge response, which kicks off the process of going to Canvas and authenticating that we just configured in Startup.cs.
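
A minimal sketch of what that controller might look like (the namespace is hypothetical and should be whatever your project uses; "Canvas" must match the scheme name from Startup.cs):

using Microsoft.AspNetCore.Authentication;
using Microsoft.AspNetCore.Mvc;

namespace CanvasOAuthExample.Controllers   // hypothetical project namespace
{
    public class AccountController : Controller
    {
        // GET /Account/Login
        // Returning a Challenge for the "Canvas" scheme starts the OAuth flow configured in Startup.cs;
        // once authenticated, the user is redirected back to the site root.
        public IActionResult Login()
        {
            return Challenge(new AuthenticationProperties { RedirectUri = "/" }, "Canvas");
        }
    }
}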

 

To call this Action I've added a link to the Shared/_Layout.cshtml file so that it appears on every page.

Login link

This basically renders as a link to the Login Action of the Account controller.
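
For reference, the link in Shared/_Layout.cshtml can be as simple as a tag-helper anchor like the sketch below (the link text is my choice):

<a asp-controller="Account" asp-action="Login">Log in with Canvas</a>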

 

Now to see whether the user has successfully logged in and what their name is I've modified the Home/Index.cshtml file as follows: 

 

Index page with log in details

If the user is logged out the page will say "Not logged in". If the user is logged in the page will say "Logged in XXXX" where XXXX is the user's name in Canvas.
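
The check in Home/Index.cshtml can be a simple sketch along these lines, using the claims that were mapped in Startup.cs:

@if (User.Identity.IsAuthenticated)
{
    <p>Logged in @User.Identity.Name</p>
}
else
{
    <p>Not logged in</p>
}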

 

Step 4 - Test

 

Now when we run the application we get a plain looking standard web page but it does have a Log in with Canvas link and a statement saying we are not currently logged in.

Testing the integration

When we click the Log In with Canvas link we get sent to the Canvas Log in page (assuming we are not already logged in to Canvas). 

 

Testing the integration - Canvas login

 

The user is then asked to agree to authorize the calling web app. Note that the name, icon and other details are all configurable within the associated Canvas Developer key.

 

Authenticate

 

They are then taken back to the web app, having been authenticated.

Completion

 

Note that in this containerized instance of Canvas the default admin user has 'canvas@example.edu' set as their name which is why an email address is being shown. This would normally be their proper name in Canvas.

 

Summing up

If you are an ASP.NET Core developer looking to use OAuth with Canvas then this will, hopefully, have provided a starting point for you to get your own integrations working. It was a bit of a struggle at times, but half of that was returning to ASP.NET after some time away, so there has been a fair bit of relearning as well as quite a bit of new learning. I'm sure there are a heap of improvements that can be made. I'd love to hear suggestions.