BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Europe/Stockholm
X-LIC-LOCATION:Europe/Stockholm
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700329T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701025T030000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20220812T074335Z
LOCATION:Singapore Room
DTSTART;TZID=Europe/Stockholm:20220629T140000
DTEND;TZID=Europe/Stockholm:20220629T143000
UID:submissions.pasc-conference.org_PASC22_sess155_msa205@linklings.com
SUMMARY:Machine Learning at Scale using ALP/GraphBLAS
DESCRIPTION:Minisymposium\n\nMachine Learning at Scale using ALP/GraphBLAS\n\nYzelman\n\nWith Algebraic Programming (ALP), primitives require explicit algebraic structures, given by the programmer, alongside data-centric operations on containers. When considering linear algebraic relations and the duality between graphs and sparse matrices, GraphBLAS emerges as an algebraic programming model for high-performance, scalable graph computing.\n\nOur open-source ALP/GraphBLAS framework allows for the automatic vectorisation, fusion, and parallelisation of algebraic programs over shared- and distributed-memory parallel architectures. It includes performance semantics that allow for the systematic characterisation of workload behaviour at scale, while also helping algorithm designers implement algorithms in the most performant way possible.\n\nAfter introducing our framework, we discuss modifications that extend it beyond graph and sparse matrix computations. These modifications span three directions: 1) extensions of the supported algebraic structures, 2) the addition of novel primitives, and 3) the exposure of non-algebraic programming interfaces that translate into ALP programs at compile time. Concretely, we discuss extensions that cover dense linear algebra, traditional ML workloads, and vertex- and edge-centric programming. Some of these extensions interact tightly with compiler technologies, here touching on just-in-time compilation and the use of existing and novel MLIR dialects.\n\nThe talk closes by demonstrating the resulting ALP framework on various workloads and discussing their performance at various scales.\n\nDomain: Computer Science and Applied Mathematics
END:VEVENT
END:VCALENDAR