Virtual Design Challenge for Authenticating and Protecting Full Motion Videos

The Virtual Design Challenge is a competition that asks participants to address the critical need to detect fake full motion videos and protect authentic full motion videos.
Hosted by
Patriot One Technologies Inc. in collaboration with Blockchain@UBC with the financial support of the Canadian Government.

Challenge: Detect fake full motion videos and protect authentic full motion videos

How the challenge works:

1. FORM A DESIGN TEAM

Form a team consisting of students and/or professionals. It’s also okay to have a team of one. Register a team by October 15th.

2. DEVELOP A SOLUTION

Describe a solution that addresses one or both of the streams described below. There will be an online Q&A session on October 29th to answer any questions you may have about the design challenge and to provide guidance on developing your solution.

Submit a solution by November 15th

3. SUBMIT BY DEADLINE

Submit solutions by November 15th. By November 22nd, three teams will be selected to present their solutions at a design challenge event to be held at the University of British Columbia on December 3rd, 2019. The winning solution will be chosen at this event and will receive the first-place prize of $6000; the second- and third-place winners, also chosen at the event, will receive $2500 and $500 respectively.

Quick Process Overview


Design Challenge Stream 1: Identifying Adversarial Attacks Embedded within Full Motion Video

Overview:

Full Motion Video (FMV) is a rich data source that can support the use of powerful analytics to extract meaningful information from the environment. In a security application, FMV can be used to identify and track persons of interest or detect prohibited items as they enter a protected site. The modern machine learning algorithms that power these analytics are susceptible to adversarial attacks introduced to the system by a malicious actor. In mission-critical systems, it is crucial that these attacks are identified and appropriately handled. Participants are invited to design a method that identifies these attacks and to produce a prototype that tests their design on real data.

Design:

The design should include the following elements:

1. A survey of the various ways a video system may be targeted with an adversarial attack.

2. A method to identify whether the FMV, in part or in whole, contains an adversarial attack. The design should note which attack vectors the method identifies, as well as any weaknesses it has.

Prototype:

The prototype should use an image or video dataset that includes examples of an adversarial attack (for an example dataset, see https://www.kaggle.com/c/nips-2017-defense-against-adversarial-attack/overview).
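To make the prototype requirement concrete, here is a minimal, illustrative baseline in Python (not part of the official challenge materials): a denoising-residual detector in the spirit of "feature squeezing", which flags frames whose high-frequency content is inflated by a bounded perturbation. The smooth gradient standing in for a clean frame, the ±0.05 perturbation standing in for an FGSM-style attack, and the 0.01 threshold are all invented for illustration; a real system would calibrate the threshold on known-clean footage and evaluate against real attack data.

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter with edge padding (a simple denoiser)."""
    p = np.pad(img, 1, mode="edge")
    stack = [p[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(3) for j in range(3)]
    return np.median(np.stack(stack), axis=0)

def adversarial_score(img):
    """Mean absolute residual between a frame and its denoised version.
    High-frequency adversarial noise inflates this score; natural,
    locally smooth image structure mostly survives the filter."""
    return float(np.mean(np.abs(img - median_filter3(img))))

rng = np.random.default_rng(0)
# Toy "clean" frame: a smooth gradient (stand-in for natural image structure).
clean = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
# Toy "adversarial" frame: the clean frame plus a small bounded perturbation,
# mimicking an FGSM-style attack with epsilon = 0.05.
adversarial = np.clip(clean + 0.05 * rng.choice([-1.0, 1.0], clean.shape), 0, 1)

threshold = 0.01  # illustrative; would be calibrated on known-clean footage
print(adversarial_score(clean) < threshold)        # clean frame passes
print(adversarial_score(adversarial) > threshold)  # perturbed frame is flagged
```

This kind of detector only covers noise-style perturbation attacks; physical-patch and sensor-level attack vectors, which the survey element above should also cover, need different defenses.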

Design Challenge Stream 2: Writing Smart Contracts to Secure Full-Motion Video Archives

Overview:

FMV is a crucial asset when making operational decisions. Massive volumes of FMV are captured and stored with various automated tools for retrieval and processing. After FMV is captured, a malicious actor or insider threat can mount adversarial attacks. How should we design a blockchain-based system to protect FMV from tampering during storage, retrieval, and processing operations?

Design:

The design should include the following elements:

1. Storage

The goal of this section is to develop a secure, reliable, and efficient data storage and retrieval mechanism using blockchains. This may involve on-chain or off-chain storage, data compression, access control, etc.

2. Video integrity

This section involves proposing solutions for preserving video integrity and for detecting tampering and spoofing through mechanisms such as video hashing, watermarking, and machine learning. Blockchain’s immutability and distributed nature can be leveraged to achieve this goal.

The above-mentioned goals can be achieved by designing applications for secure storage, retrieval, and processing of data from distributed storage and blockchains.
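As one sketch of how the storage and integrity goals can combine, a common pattern is to keep the bulky video off-chain and commit only a compact fingerprint on-chain. The chunking scheme and names below are illustrative assumptions, not a prescribed design: the video is split into chunks, each chunk is hashed with SHA-256, and the Merkle root of those hashes is what a smart contract would record at capture time.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(chunks):
    """Merkle root over the SHA-256 hashes of the video chunks.
    Only this 32-byte root needs to go on-chain; the video stays off-chain."""
    level = [sha256(c) for c in chunks]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last hash on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Toy stand-in for fixed-size video chunks (e.g., 1 MiB segments in practice).
video = [b"frame-group-%d" % i for i in range(5)]
committed = merkle_root(video)    # this value would be written to the chain

# Later, an auditor recomputes the root from the stored copy.
assert merkle_root(video) == committed       # untampered copy verifies
tampered = list(video)
tampered[2] = b"frame-group-2 (edited)"
assert merkle_root(tampered) != committed    # any edit changes the root
```

Using a Merkle tree rather than one hash of the whole file also lets an auditor verify or localize tampering in a single retrieved chunk with a logarithmic-size proof, which matters for the efficiency goal of the storage section.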

Prototype:

This phase may involve writing smart contracts, programming effective hashing mechanisms, watermarking, and/or implementing machine learning models. Prototypes developed in this phase may use an Ethereum testnet or the Hyperledger platform. A similar prototype is described in https://arxiv.org/abs/1904.12059.

Resources

BBC Queen Trailer—YouTube.

Cyclone Vardah winds so powerful this bus flips over.

Don’t Fall For This Fake Viral Video Of Hurricane Irma.

Emily K fox news.pdf.

Fake cave-diving photos, videos spread online amid Thai rescue mission | CTV News.

Fake Content Spreads Online During Thai Cave Rescue Operation.

Fake News Site Fools Facebook Live, Corrupts Twitter Search.

Firing of Shirley Sherrod. (2019). In Wikipedia.

Fox News displays old campaign footage to claim Palin is getting ‘huge crowds’ at her book signings.

Fox News screws up again—Macleans.ca.

French election.

Google News Initiative Training Center.

Hendry Moya Duran irma—Facebook Search.

How do I reference an online video (e.g., YouTube, TED Talk, or webinar) in APA style?

Humans and AI team up to improve clickbait detection.

Jeffrey Friedl’s Image Metadata Viewer.

Media Relations | Fox News.

Obama speech out of context, Fox News out of control [w/Sean Hannity]—YouTube.

Obama’s passionate political speech could have unintended consequences. (2018).

Queen Elizabeth—Photoshoot by Annie Leibovitz—YouTube.

Sean Hannity Confesses Using Fake Footage: "Jon Stewart Was—YouTube.

Shocking photos, video show Egyptian protesters pushing armored police vehicle off bridge—The Washington Post.

Storyful’s best practices for verifying social media content | International Journalists’ Network.

The National—CBC Television; Toronto—ProQuest.

Verification Archives.

Video of Police Van falling off bridge (Arabic title: “A clear video of the armored vehicle falling from the October Bridge”).

Virtual Design Challenge Judging Criteria and Information

Judging Criteria: Points Rubric

We’re not expecting a full-blown solution to the use case (though high marks if you can do it!). Here’s what our judges will be looking for on a scale of 1-5, with 5 being the highest score:

Criteria

Level 1

Level 2

Level 3

Level 4

Level 5

Context and Relevance

How well the solution demonstrates an understanding of the use case, and the context of the use case

The context and relevance of the solution is not clearly presented or demonstrates a lack of understanding.

(e.g., the team fails to explain why their approach is necessary to solve the use case; the application of the approach seems unnecessary and a little gratuitous).

The context and relevance of the solution is somewhat clear.

The team struggles to explain why their approach(es) is/are necessary to address the use case in a compelling manner, but they give it a good shot.

The solution demonstrates an adequate level of understanding about the context(s) of the use case and the relevance of the solution as a response to the use case challenge.

The team is able to somewhat explain why their approach(es) is/are necessary to address the use case in a compelling manner.  

The context and relevance of the solution is clearly presented, and demonstrates a good understanding of the operational, technical, legal, social, and economic contexts in which the solution must be implemented. References to appropriate background documents relevant to the contexts of the solution are cited in the submission. 

The team is able to adequately explain why their approach(es) is/are necessary to address the use case in a compelling manner.  

The context and relevance of the solution is clearly presented, and demonstrates an excellent and sophisticated understanding of the operational, technical, legal, social, and economic contexts in which the solution must be implemented. It is evident that background research on the contexts of the solution has been undertaken by the team.

The team is able to explain why their approach(es) is/are necessary to address the use case(s) in a compelling manner.  It’s very clear that their approach(es) will solve a problem where other technologies will not.

Economic & Social Benefit

How well the solution demonstrates economic and social benefit

Proposed solution does not address its economic and social benefits.

Proposed solution somewhat addresses its economic and social benefits.

Proposed solution adequately addresses its economic and social benefits.

Proposed solution elegantly addresses its economic and social benefits, making a solid case for the solution design in relation to these benefits.

Proposed solution elegantly addresses economic and social benefits, making a very strong case for the solution design in relation to these benefits.

Security & Privacy

The solution overlooks security & privacy considerations.

The team solution somewhat considers security & privacy considerations. Privacy & security appear to be afterthoughts.

The team solution clearly and effectively incorporates security & privacy considerations.

The team solution clearly and effectively incorporates security & privacy considerations to a high standard.

The team solution clearly and effectively incorporates security & privacy considerations to a very high standard, including demonstrating adherence to ISO/IEC 27001 standards and principles of privacy by design.

Usability & Convenience

How usable the solution is in relation to standard software usability criteria, such as intuitive design, ease of learning, efficiency of use, memorability, error frequency and severity, subjective satisfaction.

The solution does not address usability & convenience.

The solution gives only a passing nod to usability and convenience. Usability & convenience appear to be afterthoughts.

The solution demonstrates a reasonable level of thought about usability & convenience. The discussion of these elements is convincing. 

The solution demonstrates careful attention to usability & convenience. 

The solution demonstrates careful and sophisticated attention to usability & convenience, and references authoritative supporting or background documents such as government design standards.

Feasibility

How easy the solution would be to implement from an operational, technical, legal, social, and economic perspective

The solution is impractical.

The solution is not feasible in the current operational, technical, legal, social or economic context, but could be with some changes.

The solution is feasible in relation to one aspect of the operational, technical, legal, social or economic context, but would require adjustment to be feasible along the other dimensions.

The solution is feasible in relation to more than one aspect of the operational, technical, legal, social or economic context, but would require adjustment to be feasible along the other dimensions.

The solution is feasible along all dimensions.

Creativity & Presentation

How creative the solution is, and how engaging the final presentation on the solution is, including how clearly and succinctly it conveys the highlights of the use case(s) and how the design addresses the above criteria as a solution to the use case problems

The design solution does not demonstrate a significantly new approach.

The team presentation does not effectively tell the story about how their solution will solve pain points. 

The design solution demonstrates an incrementally new approach.

The team presentation somewhat tells the story about how their solution will solve pain points.

The design solution is somewhat novel and demonstrates creative thinking.

The team presentation clearly and effectively tells the story about how their solution will solve pain points. The presentation needs a little more polish but is otherwise very good.

The design solution is novel and demonstrates creative thinking.

The team presentation clearly and effectively tells the story about how their solution will solve pain points. The presentation is polished.

The design solution is very innovative and demonstrates a high degree of creativity.

The team presentation clearly and effectively tells the story about how their solution will solve pain points. The presentation is very professionally presented.

The Judging Process

The judging process will be in two stages:

Stage 1

Participants will submit their design solution materials by 11:59 on November 15, 2019 using the link provided.  A panel of judges will assess the solutions against the material presented and any other artifacts that are uploaded or made available (e.g., videos, documentation, etc.).  

By Friday, November 22, teams will be notified about whether they have been selected as finalists. Three finalists will be chosen. At least one representative from each chosen team must attend the event at which the winners will be chosen, to be held at the University of British Columbia on December 3, 2019.

Stage 2

The top three teams (the finalists) will be invited to give a 20-minute presentation on their design solution at the December 3 UBC event. In anticipation of being chosen as finalists, teams should block out the time to attend the conference now. Teams unable to present in person at the event may present remotely via videolink. The organizers of the Design Challenge are not able to sponsor travel to enable teams to present in person. Teams who cannot guarantee that they will be able and willing to present on the day of the conference, either in person or via videolink, cannot be selected as finalists.

A panel of judges, plus participants in attendance at the event, will assess the finalists’ solutions based on the presentations given and will collectively make a final determination of the winning team on the day of the presentations.

Our Judges

We have a very distinguished panel of judges for our design challenge.

James Cameron

James Cameron is a Machine Learning Engineer at Patriot One Technologies. He received both his B.Sc.E and M.Sc.E. from the University of New Brunswick. His research interests include computer vision, machine learning, and computer hardware. In his spare time he enjoys music and building gaming computers.

Chen Feng

Dr. Feng's research interests include information & coding theory, wireless communications & networking, cloud computing & big data, and very recently Blockchain technology. In particular, he is interested in adapting new ideas and tools from information theory, coding theory, stochastic processes, and optimization to design better networking systems. The primary goal of his research is to bridge the gap between theoretical advances and system implementations.

Zheng Liu

Zheng Liu received the Doctorate degree in engineering (earth resources) from Kyoto University, Kyoto, Japan, in 2000, and the Ph.D. degree (electrical engineering) from the University of Ottawa, Canada, in 2007. From 2000 to 2001, he was a Research Fellow with the Nanyang Technological University, Singapore. He then joined the National Research Council of Canada (Ottawa, Ontario) as a Governmental Laboratory Visiting Fellow nominated by NSERC in 2001. In 2002, he became a Research Officer associated with two research institutes of NRC (Aerospace [IAR] & Construction [IRC]). From 2012 to 2015, he worked as a full Professor with Toyota Technological Institute, Nagoya, Japan. He is now with the Faculty of Applied Science at the University of British Columbia – Okanagan as an associate professor. His research interests include predictive maintenance, data/information fusion, computer/machine vision, machine learning, smart sensors and industrial IoT, and non-destructive inspection and evaluation. He is a senior member of IEEE and a member of SPIE. He co-chairs the IEEE Instrumentation and Measurement Society technical committee (TC-1). He holds a Professional Engineer license in both British Columbia and Ontario. Dr. Liu serves on the editorial boards of journals including IEEE Transactions on Instrumentation and Measurement, IEEE Transactions on Mechatronics, IEEE Journal of RFID, Information Fusion (Elsevier), Machine Vision and Applications (Springer), Canadian Journal of Electrical and Computer Engineering, Intelligent Industrial Systems (Springer), and IET/CAAI Transactions on Intelligence Technology. He also serves on the Standards & Interoperability Committee of Research Data Canada.

James Stewart

Dr. James Stewart is SVP Video Analytics with Patriot One Technologies, creator of the PATSCAN Multi-Sensor Threat Detection System. James has a background in policing and cybersecurity and joined the Patriot team through the acquisition of his video analytics company, EhEye.

Ken Thibodeau

An internationally recognized expert on electronic records and digital preservation, he has served as guest scientist at the National Institute of Standards and Technology and as director of the Center for Advanced Systems and Technology and of the Electronic Records Archives Program at the National Archives of the U.S.  At the Department of Defense, he led the development of the world’s first standard for records management software, DoD 5015.2-STD.  He also led the Preservation as a Service for Trust project in the ITrust collaboration.