We created a four-day agenda with different timetables, designed for an international community of speakers and attendees.
All times are in Coordinated Universal Time (UTC).
Scala 3 will introduce a new macro system. This talk will give an overview of what is new, what was left behind, and how to handle the migration of macros from Scala 2 to Scala 3.
New stuff:
- Simpler and safer expression-based macros
- A new AST reflection API aligned with TASTy
- Macros can be defined in the project where they are used

Dropped stuff:
- The scala.reflect API
- Annotation macros

Migration:
- An overview of current projects supporting the new macros
- Migration strategies
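As a taste of the new metaprogramming features, here is a sketch of compile-time programming with `inline` (a full quoted macro must be defined in a separate compilation unit from its call site, so this example sticks to the `inline` layer the new macros build on); it follows the well-known `power` example from the Scala 3 documentation:

```scala
// The canonical `power` example from the Scala 3 metaprogramming docs:
// `inline` parameters and `inline if` force the recursion to unfold at
// compile time, so power(x, 10) expands into plain multiplications.
object PowerMacro:
  inline def power(x: Double, inline n: Int): Double =
    inline if n == 0 then 1.0
    else inline if n % 2 == 1 then x * power(x, n - 1)
    else power(x * x, n / 2)

@main def powerDemo(): Unit =
  println(PowerMacro.power(2.0, 10)) // 1024.0
```

Because `n` is an `inline` parameter, the call site must pass a constant, and the whole computation is resolved during typing rather than at run time.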
Scala.js is often described as rock solid software, and with reason: the number of open bugs can usually be counted on one's fingers.
Until recently, though, there were three areas of the Java libraries that were known to be broken, without any fix in sight: parsing floats, formatting floats/doubles, and `java.util.regex.*`.
In this talk, we present those three problems, why they are hard, and how we finally fixed them.
It depends. As with many everyday problems - here, the task is exposing and interacting with a WebSocket endpoint - there are a couple of approaches one may take. Even if we restrict ourselves to functional programming, there are still choices ahead of us!
In this live-coding talk, we'll implement a functional-declarative streaming WebSocket server, using the tapir, http4s and cats-effect libraries. Then, we'll consume the created endpoint using a functional-imperative WebSocket client, this time using the sttp client and ZIO libraries.
How readable is the resulting code? How much boilerplate do we have to cope with? Does functional programming offer any benefits compared to other approaches? Let's figure this out in this session!
Scala 3, also known as Dotty, is the next version of the Scala programming language, bringing many new features, reducing boilerplate and removing warts.
In this talk, Jamie will show you work being done at the Scala Center to help the transition from Scala 2 to 3: how we are utilising the TASTy interchange format to bridge the gap between the two compilers and ensure binary compatibility under separate compilation; how you can gradually migrate your Scala 2 projects by depending on libraries that have already transitioned to Scala 3; and finally, the steps you can take to migrate your own code to the new compiler.
After 8 years of work, 28,000 commits, 7,400 pull requests, 4,100 closed issues – Scala 3 is finally out. Since the first commit on December 6th 2012, more than a hundred people have contributed to the project. Today, Scala 3 incorporates cutting edge research in type theory as well as the industry experience of Scala 2. We've seen what worked well (or not so well) for the community in Scala 2. Based on this experience we've created the third iteration of Scala – easy to use, learn, and scale.
In my talk, I want to cover three main points:
- Give an overview of the motivations behind Scala 3 and its creation history.
- Give pointers on how to get started with it, and where to find more information for getting up to speed.
- Give some indication of what will follow, and how we envision future releases being developed and shipped.
Algebraic Data Types are a very simple, yet very powerful tool to use when designing systems. Most developers are familiar with them, or subsets of what we call ADTs, even if they are not aware of them - enumerations, for example, or records.
The purpose of this talk is to clarify what ADTs are, what properties they have and how these properties can be used to express strong invariants at the data level - such as making illegal states or state transitions impossible to represent.
It also explores the generalised form of ADTs - GADTs - and attempts to lift some of the confusion that surrounds them in the Scala community.
We will also (lightly) tackle the theory behind them and try to understand where the “algebraic” part of the name comes from.
By the end of the talk, attendees should have a solid intuition of when and how to use them, and be able to put them to use in their own projects directly.
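To make the idea concrete, here is a minimal sketch (not taken from the talk) of an ADT making an illegal state - a disconnected connection that carries a session id - impossible to represent:

```scala
// A connection is either disconnected or connected; only the connected
// case carries a session id, so "disconnected with a session" simply
// cannot be expressed.
enum Connection:
  case Disconnected
  case Connected(sessionId: String)

def describe(c: Connection): String = c match
  case Connection.Disconnected       => "no session"
  case Connection.Connected(session) => s"session $session"

@main def adtDemo(): Unit =
  println(describe(Connection.Connected("abc123"))) // session abc123
```

The compiler also checks the match for exhaustiveness, so adding a new case to the enum flags every place that needs updating.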
In this talk, we will introduce you to the wonderful world of tensors, which are powerful mathematical objects. As they can easily represent various data models (relational, graph, etc.) and lend themselves to multi-dimensional analytics, they show their advantages in a wide range of domains such as brain signal analysis, computer vision, and social network analysis.
Some libraries only use tensors for their modeling capabilities, while others stick to the mathematical point of view and neglect the data. This induces bad coding habits as well as error-prone preparation steps.
With TDM, we aim to provide a data-centric tensor library, relying on Spark for large-scale processing and on shapeless to implement fully type-safe tensor manipulation operators. TDM contains advanced analytics operators, such as tensor decompositions, to extract value from multi-dimensional data. We will present the library, its mechanisms and capabilities, and show some examples of its use.
Imagine that you’re driving on a racetrack at night. There’s a light at the beginning of the course and a light at the end; the rest of the course is pitch black. When you say you can’t see where the road is, you’re handed a map of the course. But don’t worry: when you run off the road and into the bushes, you can put a light where you ran off so that the next time you try the course, you can see where the road is.
This would be a crazy state of affairs, but it’s how developers are expected to write and debug code these days. We write code with no debugging statements. When we run into a problem, we’ll put debug statements (or breakpoints, if you’re lucky enough to have a local dev environment) where we think something went wrong, and keep doing that until we have a rough idea of what’s happening.
This is entirely backwards. Code maintenance involves reading and understanding code that you didn’t write, and seeing how that code operates in different environments. In production, metrics and error reporting are critically important, while in testing and development, clarity of code flow and data flow is essential. Every single part of the code should be observable by default.
Operations teams have already realized the need for observability to answer questions about errors in production. Structured operational logging, distributed tracing, and metrics are all commonly used - but there’s a catch. These tools are typically only used to determine and resolve errors and latency issues. Operational logging capturing an error doesn’t tell you what led up to it. OpenTracing instrumentation is typically not fine-grained. Metrics are aggregated and low-cardinality. As developer tools go, they don’t capture the flow of data - they cannot catch logic bugs, they will not handle database corruption or memory leaks, and they typically do not capture changes over time. Worse, these tools are typically aggregated and sampled in production, so rare bugs are harder to see.
There is a solution to this: context aware diagnostic logging. Given a sufficiently advanced context, we can determine when to log information at debug or trace level. And between Scala’s implicit support, rich language features like macros for line numbers and type classes, we can provide lightweight logging and instrumentation that aligns with your code so that observability can be turned on and off for a particular flow or a particular user.
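As a rough illustration of the idea (a hand-rolled sketch, not the speaker's actual library), a context carried through `using` parameters can decide, per flow or per user, whether debug logging is emitted at all:

```scala
// A LogContext travels implicitly through the call chain; each flow can
// switch diagnostic logging on or off without touching the code it runs
// through. The log message is by-name, so it is never even built when
// debugging is disabled.
final case class LogContext(flowId: String, debugEnabled: Boolean)

def debug(message: => String)(using ctx: LogContext): Option[String] =
  if ctx.debugEnabled then
    val line = s"[${ctx.flowId}] $message"
    println(line)
    Some(line)
  else None

def handleRequest(user: String)(using LogContext): String =
  debug(s"handling request for $user")
  s"ok:$user"

@main def loggingDemo(): Unit =
  given LogContext = LogContext(flowId = "checkout-42", debugEnabled = true)
  handleRequest("alice") // emits "[checkout-42] handling request for alice"
```

A real implementation would add line numbers and type-class-based structured rendering via macros, as the abstract describes, but the shape of the context plumbing is the same.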
Research shows that on average developers spend about 58 percent of their time reading code! However, we are not explicitly taught to read code in school or in boot camps, and we rarely practice code reading either.
Maybe you have never thought about it, but reading code can be confusing in many ways. Code in which you do not understand the variable names causes a different type of confusion from code that is tightly coupled to other code. In this talk, Felienne Hermans, associate professor at Leiden University, will first dive into the cognitive processes that play a role when reading code. She will then show you theories for reading code, and close the talk with some hands-on techniques that can be used to read any piece of code with more ease and fewer headaches!
At VirtusLab, we deeply care about the future of Scala and its ecosystem. With Scala 3, we have taken this effort to the next level by building a full team that works together with Martin, EPFL, and the Scala Center to make Scala and its ecosystem friendly, inclusive, and productive. In this talk, we will share current developments, plans, and challenges to give everyone a sneak peek at what is coming. We will share how we are going to support companies and organizations that are willing to migrate to Scala 3, and what we think are the crucial areas slowing Scala adoption in the business world.
These days, full-stack development in Scala is a widespread reality. But Scala 3 makes it exciting again, and simpler, without disrupting established practices. In this talk, we walk through building a simple, yet not simplistic, full-stack application in Scala 3. We use both libraries that have already been published for Scala 3, like circe, and libraries that have not, like akka-http and scalajs-dom, through Scala 3's interoperability with Scala 2.13. We combine them in an application with shared data types across the client and server, entirely written in Scala 3.
Scala 3 introduces a number of language features designed to enable safe, succinct, and efficient type class derivation. These features are of necessity very low level and were designed with a view to being built on by higher-level libraries which have better end-user ergonomics. shapeless 3 was co-designed with these language features, and in this talk I will give an overview of how it builds on them to provide a type class derivation model which improves on shapeless 2 both in terms of expressivity and compile-time and runtime performance.
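To give a flavour of the low-level features involved (a hand-rolled sketch using only the compiler-provided `Mirror`, not shapeless 3's actual API), here is a minimal `Show` type class with product derivation:

```scala
import scala.deriving.Mirror
import scala.compiletime.{erasedValue, summonInline}

// A tiny Show type class whose instances for case classes are derived
// from the compiler-generated Mirror, the same foundation shapeless 3
// builds its richer derivation model on.
trait Show[A]:
  def show(a: A): String

object Show:
  given Show[Int] with
    def show(a: Int): String = a.toString
  given Show[String] with
    def show(a: String): String = s""""$a""""

  // Summon a Show instance for every element type of the product.
  inline def summonAll[T <: Tuple]: List[Show[?]] =
    inline erasedValue[T] match
      case _: EmptyTuple => Nil
      case _: (t *: ts)  => summonInline[Show[t]] :: summonAll[ts]

  inline def derived[A <: Product](using m: Mirror.ProductOf[A]): Show[A] =
    val elemInstances = summonAll[m.MirroredElemTypes]
    new Show[A]:
      def show(a: A): String =
        a.productIterator
          .zip(elemInstances.iterator)
          .map((value, inst) => inst.asInstanceOf[Show[Any]].show(value))
          .mkString(s"${a.productPrefix}(", ", ", ")")

case class Point(x: Int, y: Int) derives Show

@main def deriveDemo(): Unit =
  println(summon[Show[Point]].show(Point(1, 2))) // Point(1, 2)
```

The `derives Show` clause simply calls `Show.derived` with the `Mirror` for `Point`; higher-level libraries wrap this machinery in friendlier, better-performing abstractions.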
There’s a joke that a framework is a product with the business logic removed, but all the assumptions left in. In this talk we’ll explore an alternative: solving the most common enterprise issues with the composable ecosystem of Typelevel libraries, avoiding not-invented-here (NIH) and lock-in.
We’ve been involved in many enterprise projects, and have seen many recurring patterns implemented in an ad-hoc, fragile way. At the same time, and more often than not, any custom libraries rapidly develop “bit-rot”, creating a maintenance burden. Instead we can be brave! We can avoid the sunk-cost fallacy by refactoring and retooling using powerful and composable abstractions. We can also trust we are reducing our risk because of the high quality of these open source projects.
We’ll dive deep into how to use the Typelevel stack to handle common tasks like data validation, integration with data sources and sinks, the modeling of actions associated with state machines, and more. We’ll be using libraries like cats, cats-effect, refined, and more, to fix and extend existing systems to be more powerful, safer, and ultimately more maintainable.
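As a taste of the validation pattern (sketched here with plain Scala `Either` so it stays dependency-free; cats' `Validated` and `mapN` provide this properly, and `UserForm` and its rules are hypothetical):

```scala
// Error-accumulating validation in the style of cats' ValidatedNel,
// approximated with plain Either. Unlike Either's fail-fast flatMap,
// combining both sides collects every error, not just the first.
final case class UserForm(name: String, age: Int)

def validName(name: String): Either[List[String], String] =
  if name.nonEmpty then Right(name) else Left(List("name must not be empty"))

def validAge(age: Int): Either[List[String], Int] =
  if age >= 0 then Right(age) else Left(List("age must be non-negative"))

def validateUser(name: String, age: Int): Either[List[String], UserForm] =
  (validName(name), validAge(age)) match
    case (Right(n), Right(a)) => Right(UserForm(n, a))
    case (l, r) =>
      Left(l.left.toOption.getOrElse(Nil) ++ r.left.toOption.getOrElse(Nil))

@main def validationDemo(): Unit =
  println(validateUser("", -1)) // both errors reported, not just the first
```

With cats, the pairwise matching collapses to `(validName(n), validAge(a)).mapN(UserForm.apply)` over `ValidatedNel`, which scales to any arity.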
This talk will cover the basic types of types in Scala 3, including topics such as nominal, structural, singleton, refinement, higher-kinded, parameterized, bounded, abstract, path-dependent, sub-, super-, union, intersection, and opaque types, and touch on variance to boot.
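A few of these kinds of types can be sketched in a handful of lines of Scala 3 (the names below are illustrative, not from the talk):

```scala
// Opaque type: a zero-cost wrapper whose representation (Long) is
// hidden outside the enclosing object.
object Ids:
  opaque type UserId = Long
  object UserId:
    def apply(raw: Long): UserId = raw
    extension (id: UserId) def raw: Long = id

// Union type: a value that is one of several alternatives.
def describe(value: Int | String): String = value match
  case i: Int    => s"int:$i"
  case s: String => s"string:$s"

// Intersection type: a value that satisfies several traits at once.
trait HasName { def name: String }
trait HasAge  { def age: Int }
def greet(x: HasName & HasAge): String = s"${x.name}, ${x.age}"

@main def typesDemo(): Unit =
  println(describe(42))       // int:42
  println(Ids.UserId(7L).raw) // 7
```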
Automated asynchronous execution, caching, a bitemporal data store, distribution, dependency tracking - some of the core features of our platform, all accessed by our users with the simple addition of just 5 characters: @node. Built by diverse and talented developers on ideas from the Scala community - 10 years and 7 million lines of Scala in the making - our programming framework truly separates business logic from execution concerns. Join us to hear how we are broadening our engagement with open source, and how we built on our local volunteering efforts to find talent in overlooked communities and ultimately hire nearly 100 developers in Ghana.
Most people don’t go into work excited to update their old code to slightly newer versions of APIs and figure out what has replaced what. This is complicated in Spark, where new versions drop support for older language releases. This talk will explore how we can use tools to semi-automatically upgrade our Scala Spark code. While we look at the tools, we’ll talk about limitations in the current tooling and how they impact what we can do with automatic upgrades.
We’ll wrap up by talking about how to test that your upgraded Spark code is correct.
With one of the largest movie and TV catalogs in streaming, it’s imperative that our recommendation system matches the right content to the right user in real time. As our content library and user base evolve, it’s also critical that we’re able to rapidly iterate and improve the user experience through experimentation. With the unique advantages of Scala, Akka, gRPC, and ScyllaDB, we’ve been building a low-latency, scalable machine learning platform with a fraction of the staff it takes to build and maintain similar platforms at Uber, Airbnb, and Netflix.
In this talk, we’ll describe how Scala made it easy to create flexible domain models for serving recommendations and describing A/B test experiments. With Akka Streams, we’ve been able to create services that scale easily and are easy to understand/maintain. Using ScyllaDB, we’ve drastically improved latency while simplifying our architecture by removing complicated caching services and layers. We’ll outline our machine learning platform and take a peek under the hood of our experimentation engine, batch recommendation service, and real-time inferencing services.
It is nice to start simple and embrace YAGNI (You Ain't Gonna Need It), but sometimes you realize later that You Errr Gonna Need It. Of course there is always a balance between paying the price up-front vs paying when you do need it. In Scala 3 there are now a number of things that have become easy enough that they are worth paying for up-front, including Effect Systems, Observability, and Serverless Compatibility. In this talk you'll learn about how you can embrace these up-front, to avoid higher pain & costs later by using technologies including ZIO, OpenTelemetry, and GraalVM Native Image.
Python is the dominant language for data science today with a plethora of machine learning and scientific computing libraries. Scala, on the other hand, is the dominant language for big data processing. What if we could bring these two worlds together?
ScalaPy enables Scala applications to use Python libraries with a seamless interop layer. With support for core Python features including native bindings, ScalaPy can be used anywhere from training neural networks on GPUs with TensorFlow to making astronomical calculations with Astropy. In addition, ScalaPy supports creating type definitions to enable type-safe interactions with Python libraries. In this talk, we’ll explore how ScalaPy works and how it can be used in different applications. We’ll also look at support in environments like Jupyter notebooks and ways to optimize interop performance.
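For a flavour of the interop, here is a minimal sketch (assuming a local Python installation with NumPy, and the ScalaPy dependency on the classpath):

```scala
import me.shadaj.scalapy.py
import me.shadaj.scalapy.py.SeqConverters

// Load a Python module and call into it from Scala: the Scala Seq is
// proxied into Python, numpy sums it, and the result is read back as
// a Scala Int.
@main def scalaPyDemo(): Unit =
  val np  = py.module("numpy")
  val arr = np.array(Seq(1, 2, 3).toPythonProxy)
  val total = np.sum(arr).as[Int]
  println(total)
```

Calls like `np.array` go through ScalaPy's dynamic layer; the type definitions mentioned in the abstract let you replace that dynamic access with statically checked facades.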