Video of My Talk at ScalaDays 2010
April 30, 2010
Video of my talk at ScalaDays is online. Check it out. Feedback very much appreciated!
I had the good fortune not only to attend ScalaDays 2010, but also to speak (generously supported by my employer). The experience overall was great. I talked to some really enthusiastic and interesting people (including two high school kids from Austria who were using Lift because, after three years of Java, they were already tired of it!).
It was very interesting to compare this conference to a No Fluff Just Stuff conference. At those, the speakers are professional, both in that they are getting paid and in that they are just really, really good at speaking and presenting. Their talks, however, tend to be introductory and not terribly advanced. I think that's fine; these are the types of speakers you want when you need to "get the message out" on some new and interesting technologies.
The speakers at ScalaDays, however, were just regular people using Scala and wanting to talk about it. As such, things like presentation style and slides varied widely in quality. That being said, even the speakers who had the most trouble still had some interesting things to talk about. Following are my quick thoughts on each of the talks I attended:
This talk covered the new features of Scala 2.8, which are quite exciting: package objects, the new collections library, and default/named arguments are just some of the highlights. All in all, some very exciting features.
The speaker had created a way to write Scala code and turn it into JavaScript, much like GWT does with Java. His approach wasn't geared around UI widgets, however, but around creating AJAX applications using WebSockets. I've always felt JavaScript was an assembly language (which is why I think CoffeeScript is so interesting), and this was another way to leverage JavaScript without having to deal with it.
This was a preview of a Scala 2.8.1 (or later) feature, where a subpackage of the new 2.8 collections library was created that could perform the collection operations in parallel. Consider a filter on a collection; this is highly parallelizable. This library is being created to basically implement that in a transparent fashion. Really interesting stuff. They had even considered load balancing, via some interesting work-stealing mechanisms. In only 30 minutes, there wasn't a huge amount of detail, but this seems to be another way in which parallel computation could be done very easily.
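For context, this work eventually shipped: Scala 2.9's parallel collections expose it via .par. A minimal sketch of the "transparent parallel filter" idea, using that released API rather than whatever was demoed in the talk:

// Parallel filter using the parallel collections that shipped in Scala 2.9.
// Calling .par gives a parallel view of the collection; filter then runs
// across multiple threads on a work-stealing fork/join pool.
val numbers = (1 to 1000000).toVector
val evens = numbers.par.filter(_ % 2 == 0)
// .seq converts back to a sequential collection when you're done
val result = evens.seq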
The concept of "object spaces" never really sunk in with me, but it wasn't actually that relevant to the talk. This was basically a comparison of a Java library and it's straight-port to Scala. As expected, the Scala implementations were all much more succinct and (to me) clear. The speakers hadn't used a lot of Crazy Scala Magic™ to achieve the real code reduction that was shown. This talk made it very clear that even if you write "Java in Scala", you still get a lot of productivity gains.
Software Transactional Memory is one of the go-to features of Clojure. Clojure supports it natively, while Scala does not. This talk was on an implementation of STM as a library for Scala. It seemed relatively simple to use and understand. While it's not as "clean" as the Clojure way of doing things, the speaker made a solid case for a library implementation being a better tradeoff, given the difficulty in adding it to the language at this point in Scala's life (it's easy to forget how mature the Scala language and compiler are, despite how "new" it seems).
This talk proposed a function type called "translucent functions" that is better suited to concurrency than the partial functions used in the actor library. Honestly, it was a bit over my head, and I wasn't able to keep up with the speaker. Perhaps the paper would be more elucidating for me.
This talk was very similar to the one on Scala Parallel Collections. Here, the speaker had created a collection library separate from the Scala one, designed specifically to allow parallel processing. He had a very different approach, using control structures to indicate areas where parallelism could occur. It was pretty interesting, but the Scala 2.8 collections library implementation seemed a lot cleaner to me than this.
One of the new features of 2.8 is the ability to "specialize" your code to work with primitive types and avoid boxing/unboxing. This is something Java cannot really do. This talk was an overview of how that works and what specializations were applied to the Scala library. I guess it's nice that this kind of optimization is available, but I think it would have negligible gains for most real-world applications that aren't doing constant number crunching. Even in that case, you don't necessarily need this.
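To make the feature concrete, here's a tiny toy example of the annotation (my own sketch, not code from the talk):

// With @specialized, the compiler generates additional versions of this class
// for primitive type parameters, so Box(42) can store an unboxed Int rather
// than a java.lang.Integer. You can also restrict it to specific primitives.
class Box[@specialized T](val value: T) {
  def get: T = value
}

val ints = new Box(42)      // can use the Int-specialized variant, no boxing
val strs = new Box("hello") // falls back to the generic (boxed) version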
How could I not attend the only session (and mention!) of Monads? The speaker gave a good overview of using Monads to implement automatic resource management (e.g. File.open { |file| file.readlines.whatever }). The idea was that, since Monads are designed to hold onto the objects they manage, it's a bit inconvenient to deal with the results of operating on managed resources. Thus, his monads "leaked" their contents to create more concise code.
Akin to LINQ, this talk showed how to write SQL directly inside Scala code while maintaining type safety. It uses a compiler plugin to achieve this, and the speaker talked a lot about how to tell the difference between embedded SQL and regular Scala code. I gather the plugin introspects your database to get the types and generates a bunch of code using some Scala classes that get substituted in during compilation. Pretty interesting. I'm not sure I really miss having SQL right in my code, but it is certainly cleaner than JDBC.
This was a talk about how LinkedIn is using Scala to glue together some enabling technologies to create their social network graph. It seemed really clean and simple, especially since the underlying technologies are all Java-based. This was mostly an overview of the API and some usage examples. It's open source, and looks pretty interesting. I'm not sure it would be useful for me at my job, but definitely seems like a cool project.
I was looking forward to this one quite a bit, as Akka gets a lot of hype on Twitter. I think 30 minutes was just too short, as it was pretty light on details. There were a lot of assertions about Akka that the speakers just didn't have time to elucidate or substantiate. I would love to see a comparison of Scala actor vs. Akka actor code and a discussion of why the Akka way is better. Similarly, I'd love to see the basis for all the assertions about Akka's performance; it certainly gets a lot of hype. Jonas is currently working on publicizing the production deployments of Akka (which would go a long way) as well as establishing commercial support for it (which also goes a long way :).
I went to this purely for my co-workers. I hate IDEs in general, and Eclipse does some pretty evil things to my co-workers, yet many of them still swear by it. At any rate, Miles was almost impossible for me to understand (despite being English!) and most of what I got out of the presentation was that Eclipse was very much designed to work with only Java, requiring some nasty hacks to get Scala working with it. It seems Miles' Herculean efforts have been used to get around this and there now seems to be a nice development environment for extending and enhancing the Scala plugin.
While sbt isn't my ideal build tool, it's a billion times better than Maven, and Mark's talk actually made me appreciate it a lot more. He is unabashedly in favor of terse symbols over long method names, and he spent some time explaining the rationale behind them (thus making things make a bit more sense). His assertion is that you will eventually internalize the symbols and achieve productivity gains at the cost of a slight bump at the start. I tend to disagree with this, because how often is one modifying their build? This is why learning and retaining Ant and Maven is so difficult (I can't write a copy in Ant without a trip to the docs). At any rate, this was a good talk and a great overview of sbt, and, despite me not being into the crazy symbols, I will take it over Maven or Ant any day. At least it's based on a real programming language and not some XML nonsense.
This was a brief overview of Processing, for those unfamiliar with this (horribly-named) visualization tool, and then a review of the Scala bindings the presenter had created. Interestingly, he created SPDE as an SBT plugin so that users could gain the benefit of SBT's automatic re-compilation and running features. His plugin plus a text editor is a makeshift IDE for Processing, which is pretty cool. It was very high-level and didn't get into too much detail, but the Scala code was, as expected, much more concise than the Java code.
I was pretty nervous, as many of the previous speakers' topics were quite heady, but I had practiced my ass off, and I delivered the talk as well as I could. I think it was pretty well received and got some good questions. I had talked with a few people earlier in the conference who were using testing as a way to get Scala into their jobs, and a few other people who were interested in what I had to say. Several people were curious as to how the other developers at OPOWER liked Scala. I felt pretty good about it.
For this, we broke into small groups and basically unloaded on a poor member of the Scala team about how anything related to Scala could be better. Then, the Scala team assembled all of this info while Martin gave a very long-term roadmap for Scala. I do worry that the universe will never catch up to Martin and the Scala team as they add more and more advanced and innovative features.
After this, a member of the Scala team did a very fast tour of the feedback. I wish the entire list could be posted; there were a lot of very heartening complaints in there that would not be obvious from the Scala mailing list (e.g. hate on the underscore, a desire for less academic terminology). This fed my (arguably self-biased) impression that people want to get Scala into industry and that there is a large hole in this part of the Scala universe.
There were also some updates on IntelliJ's support for Scala. They sound way ahead of the curve compared to Eclipse and NetBeans, but I was a bit frightened by the IDEA guy saying that the way people learn languages is to ask for auto-completion and see what's available. I see this trend all too often and I think it's harmful. More on that later.
Despite still being stuck in Geneva, I'm glad I went. Giving my talk was fun, but it was also great to hear about the many ways in which people are using Scala. I went in expecting massive academia overload, with a lot of theoretical type system and functional programming talk, and got, instead, a wide variety of topics and just the right amount of academia (i.e. not that much). Really encouraging. Also encouraging is that ScalaDays 2011 will likely be in the Bay Area, which is a much nicer place to get stuck than stupid Geneva.
Object-Oriented design is hard, especially in a large application. It’s not always clear where logic should go, and there’s often no “right place” to put a piece of code. I’ve found that there are four distinct types of classes that, if you stick to them, can make your code a lot more understandable, and can provide clear direction as to the age-old question of “where does this code go?”
The J2EE way is to have model objects be stupid structs, and have all business logic in a service layer (this is actually very close to a classic “functional programming” way of doing things; ironic that many Java devs eschew FP). Spring lets you do whatever you want, but more or less follows this pattern.
While the “Rails Way” is to put business logic on the model objects, I think the “service layer” concept is eventually going to be common practice.
So, I’ve found that I very rarely make a pure “by-the-book according-to-Hoyle” OO-compliant class; I’ve settled on four patterns that seem to cover pretty much everything. I’ve also noticed that when these patterns get mixed together, you get trouble.
The record is a dumb struct that you usually need to appease your object-relational mapper. You may need them elsewhere to just name and type some set of data that you either can't model as a tuple because of your language, or don't want to model as a tuple because of some complexity. A record typically has methods that merely expose its contents, and it often needs to be mutable for the reasons stated. You might have derived fields that are conveniences and not based on your core business logic. An easy example is a person. They have a name and a birthdate, and you might derive their age from that:
public class Person {
  private String name;
  private Date birthdate;

  public Person(String name, Date birthdate) {
    this.name = name;
    this.birthdate = birthdate;
  }

  public String getName() { return this.name; }
  public void setName(String name) { this.name = name; }

  public Date getBirthdate() { return this.birthdate; }
  public void setBirthdate(Date birthdate) {
    this.birthdate = birthdate;
  }

  // Derived field: computed from the birthdate rather than stored
  public int getAge() {
    // I know this is slightly buggy :)
    return (int) ((new Date().getTime() - getBirthdate().getTime())
        / (1000L * 60 * 60 * 24 * 365));
  }

  // Maybe some toString, equals, etc. type stuff as well
}
This is the closest to a pure "object-oriented" design. Classes of this type are immutable and should hold data you will use a lot in your system. They will probably also have some business logic attached as methods; this business logic should be entirely focused on the object and its contents. Typical methods will give you more complex information about the data the object contains, or will vend new objects of the same type, based on the method called and its parameters.
This is the clearest distinction (in my mind) between functional programming and object-oriented programming. In an FP world, the data being operated on would be loosely defined (if at all) and you'd have functions that transform it. In an OO world, your object's data is clearly defined (by the class fields/accessors), and the operations available are the methods of the class. When you require that the objects of the class be immutable, you have a very nice encapsulated package of data and operations. This, to me, seems a lot easier to deal with than a "module" of functions and some tuples (or lists of tuples) that the functions operate on. Scala makes it very easy to create classes like this. It's probably one of the few languages that does so (Java certainly is no help, but it can be done).
public class Appointment {
  private final Date date;
  private final String description;
  private final Collection<Person> attendees;

  public Appointment(
      Date date,
      String description,
      Collection<Person> attendees) {
    // normally, you would validate the inputs
    // for sanity, e.g. Validate.notNull(date)

    // Since Date is mutable, we make a copy
    this.date = new Date(date.getTime());
    this.description = description;
    this.attendees = Collections.unmodifiableCollection(attendees);
  }

  public Date getDate() {
    // Since Date is mutable, we vend a copy
    return new Date(this.date.getTime());
  }

  public String getDescription() {
    return this.description;
  }

  public Collection<Person> getAttendees() {
    return this.attendees;
  }

  public boolean isLate(Date otherDate) {
    return this.date.before(otherDate);
  }

  public boolean shouldRemind(Date otherDate) {
    // remind if the appointment hasn't passed and is within five minutes
    return !isLate(otherDate)
        && (this.date.getTime() - otherDate.getTime()) <= (60 * 5 * 1000);
  }

  public boolean isAttending(Person p) {
    return this.attendees.contains(p);
  }

  public Appointment reschedule(Date newDate) {
    return new Appointment(newDate, getDescription(), getAttendees());
  }

  public Appointment notAttending(Person p) {
    if (isAttending(p)) {
      Collection<Person> newGroup = new HashSet<Person>(getAttendees());
      newGroup.remove(p);
      return new Appointment(getDate(), getDescription(), newGroup);
    } else {
      return this;
    }
  }
}
The benefits here are huge; immutability allows your codebase to be much more comprehensible, and allows you to use these objects in concurrent situations without worry. Since they are immutable, their methods are immediate targets for caching if you discover you need to do this to improve performance.
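As an aside on the "Scala makes this easy" point above, here's a rough sketch of what an immutable class like this can look like as a Scala case class. This is my own approximation, not a drop-in port of the Java above:

// A rough Scala approximation of the Immutable Object pattern. Case classes
// give you equals/hashCode/toString for free, plus a copy method for vending
// tweaked instances. Person is sketched here too so the example stands alone.
case class Person(name: String)

case class Appointment(
    date: Long, // epoch millis, to keep the sketch fully immutable
    description: String,
    attendees: Set[Person]) {

  def isAttending(p: Person): Boolean = attendees.contains(p)
  def isLate(otherDate: Long): Boolean = date < otherDate

  // "Mutators" just vend new, tweaked instances
  def reschedule(newDate: Long): Appointment = copy(date = newDate)
  def notAttending(p: Person): Appointment = copy(attendees = attendees - p)
}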
While you can certainly use methods (or create methods) on Immutable Object classes to “build up” the object you want, this is often cumbersome, and results in a lot of object creation for no real reason. The “builder” can be used to make this a bit simpler. The Builder is a throwaway class whose sole purpose is to create Immutable Objects. This obviously creates a very tight coupling between the two classes, but this can be worth it. This is very preferable to a mutable class and, depending on your operating environment, is preferable to making many intermediate objects you will need to create the Immutable Object.
public class AppointmentBuilder {
  private Date date;
  private String description;
  private List<Person> people = new ArrayList<Person>();

  public AppointmentBuilder setDate(Date date) {
    this.date = date;
    return this;
  }

  public AppointmentBuilder setDescription(String description) {
    this.description = description;
    return this;
  }

  public AppointmentBuilder addPerson(Person p) {
    this.people.add(p);
    return this;
  }

  public Appointment build() {
    return new Appointment(date, description, people);
  }
}
The analog of The Record, the service has no data and all logic. EJBs are services; they have no internal state, operating on their arguments and returning a result. Methods of services can be very functional in nature (operating solely on structs or immutable objects), or they may provide functionality that implements complex business logic not logically part of an immutable object’s class. In a vanilla n-tier application, you use services to get data in and out of your database (you might call these DAOs and you might distinguish different types of services for partitioning, but these are all the same sort of class).
Like records, services are not OO at all; these are the functions to your C program's structs. But there is a good reason for this design: you separate concerns, you don't need to worry about concurrency (services have no state), and you can even horizontally partition where services actually run.
public class Calendaring {
  /** Schedule an appointment */
  public Appointment schedule(
      Date date,
      String description,
      String... names) {
    AppointmentBuilder builder = new AppointmentBuilder()
        .setDate(date)
        .setDescription(description);
    for (String name : names) {
      // findPersonByName would likely delegate to a DAO; elided here
      Person p = findPersonByName(name);
      if (p != null) {
        builder.addPerson(p);
      }
    }
    return builder.build();
  }
}
I don’t write a lot of Java in 2017, but I do write a lot of Rails, and these patterns have served me well. Every bit of code I’ve written and watched grow over 4+ years that was disciplined, and followed the above patterns, has been easier to understand and test. Code that didn’t—for example, mixing a record and a service into one class—has been harder to evolve and manage.
Examining the code needed to read a file line by line is a common way to see what hoops a programming language makes you jump through. While Perl certainly has some one-liners for this, let's start with Ruby, which presents an elegant and clear way of doing it:
File.open("some_file.txt") do |file|
  file.readlines.each do |line|
    puts line.upcase
  end
end
Here's the canonical Java way of doing it, complete with plenty of places to introduce bugs:
import java.io.*;

public class ReadFile {
  public static void main(String args[]) throws IOException {
    File file = new File("some_file.txt");
    BufferedReader reader = new BufferedReader(
        new FileReader(file));
    String line = reader.readLine();
    while (line != null) {
      System.out.println(line.toUpperCase());
      line = reader.readLine();
    }
    reader.close();
  }
}
Having to call readLine() twice kinda sucks. We could use a do-while, but that requires a second line != null check. Personally, I like to forget the second readLine() and wonder why my code runs forever :) That being said, this was extremely easy to figure out, even the very first time I did it in 1998. The class names are obvious, and the documentation is excellent.
Scala to the rescue, right?
import scala.io._

object ReadFile extends Application {
  val s = Source.fromFile("some_file.txt")
  s.getLines.foreach( (line) => {
    println(line.trim.toUpperCase)
  })
}
It took some digging to get here, though. I poked around the scaladoc for scala.io and, of the few classes that were there (including a curiously named BytePickle), it appeared as though Source was the class to use. Of course, there's no easy way to create one from the constructor, and the scaladoc doesn't just say "Dude, look at the Source object". Once I looked through the Source object's scaladoc, the solution presented itself.
Of course, unlike every other line-traversing library in the known universe, Source leaves the line endings on. This is thankfully fixed in 2.8 (by which I mean 2.8 breaks 2.7's implementation, which is a strange thing for a point release to do). The real question is: "Is this how I'm supposed to read files in Scala?" With a class called Source?! A class that also sports methods like reportError and reportWarning? I guess those are only for writing the Scala compiler? If so, scala.io seems an odd place to put them.
So, my answer is "No, this cannot be how to canonically read files in Scala". Since the Java way kinda, well, sucks, what alternatives are there? There's scalax.io
, which seems to implement this as a class called, curiously, FileExtras
. I'm not sure if this code is actively maintained, but it's documented in classic Scala style: terse and full of loaded terms like "nonstrict". Nevertheless, there seems to be some code here to easily read a file "the easy way" (despite some distracting names).
This points out a big difference between "Scala the language" and "Scala the library". Scala the language is very interesting and has a lot of potential. Scala the library is schizophrenic at best; it's not sure if it wants to be OO, functional, or what. The documentation ranges from sparse to absent, and the overall designs of the classes and packages range from sublime to baffling. Quite a difference from Java, even circa 1.1.
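For what it's worth, the Ruby-style block form at the top of this post is easy enough to approximate with a small loan-pattern helper. A minimal sketch, assuming Scala 2.8's Source (withSource is just a name I made up, not a library method):

import scala.io.Source

object FileHelp {
  // Loan pattern: open a Source, hand it to the block, and always close it.
  def withSource[T](path: String)(body: Source => T): T = {
    val source = Source.fromFile(path)
    try body(source) finally source.close()
  }

  def main(args: Array[String]): Unit = {
    withSource("some_file.txt") { source =>
      source.getLines.foreach(line => println(line.trim.toUpperCase))
    }
  }
}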
Got a few questions about how I set up ❺➠.ws (which is powered by Shorty, my Scala-based URL shortener), so I thought I'd write up how I got it working. Short answer is that it was pretty easy.
I got the idea from John Gruber, who made a similar thing for his site to post entries on @daringfireball. The trickiest part was figuring out what this was called so I could find out who could sell me a domain with unicode characters in it.
It turns out, this is called an IDN (short for Internationalized Domain Name), and not everyone will sell you one. Couple that with the need to get a non-.com
domain, and I had to hunt around for a while.
I ended up going with DynaDot as they could provide the wacky hostname that I wanted as well as a .ws
TLD registration. I was amazed at the number of domain registrars whose web forms could not handle Unicode.
It's been almost 7 years since Joel Spolsky wrote his screed on dealing with Unicode, so I don't know what the deal is.
At any rate, the tricky bit was in actually using the domain, because a) entering ❺➠ into vim is nontrivial, and b) I doubt that Apache's config file would work with unicode characters in it. Enter Punycode, which is an asciification of any IDN. Fortunately, the domain host provides the Punycode for your IDN, so configuring Apache was a matter of:
<VirtualHost XXXXX>
  ServerName xn--dfi5d.ws
  ServerAlias www.xn--dfi5d.ws
  DocumentRoot /home/webadmin/xn--dfi5d.ws/html
  # whatever else goes here
  JkMount /s* ajp13
  JkMount /s ajp13
  <Directory /home/webadmin/xn--dfi5d.ws/html>
    Options Includes FollowSymLinks
    AllowOverride All
  </Directory>
</VirtualHost>
At this point, it pretty much worked, although it was sometimes difficult to get curl to work with the non-punyied name.
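Incidentally, if you want to compute the Punycode form yourself rather than taking the registrar's word for it, the JDK has shipped java.net.IDN since Java 6. A quick sketch (the printed value should match what the registrar shows, e.g. the xn-- name in the config above):

// Convert an internationalized domain name to its ASCII (Punycode) form
// and back, using java.net.IDN from the standard library (Java 6+).
object PunycodeCheck {
  def main(args: Array[String]): Unit = {
    val unicodeName = "❺➠.ws"
    val asciiName = java.net.IDN.toASCII(unicodeName)
    println(asciiName)                          // the xn--... form for Apache, curl, etc.
    println(java.net.IDN.toUnicode(asciiName))  // round-trips back to the IDN
  }
}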
One thing that was weird was that a lot of domains I wanted to try were taken or not available (with no explanation). Often it seemed like the punycode version was a normal-looking URL that was taken; I tried several IDNs that had a unicode character with a weird "5", and they punyied to an ascii 5. Not sure what the deal is there, but I eventually found the one I wanted.
Been working with Spring MVC recently. My enthusiasm has waned somewhat, as I've discovered that for all of its tweakability and configurableosity, it omits what I believe to be incredibly obvious things:

- There's no way to have the foo method of the class BarController invoked when I request the URL bar/foo without a lot of configuration, some of which subtly conflicts and causes silent failures. The default configuration is useless.
- If I want to send the user to FooController's bar method, shouldn't I be able to redirect/route to that in code via route(FooController.class,"bar"), regardless of the specific URL that FooController and bar respond to? Instead, I've got magic strings everywhere and if my URLs ever change, god help me. And don't get me started about accessing this stuff via tests.

If you haven't checked out Prezi as a means to create presentations, you really should. They have rethought the entire user experience, and it's totally awesome. To give you a flavor of it, I created a presentation of my blog post on Deconstructing Scala's Map Literal. I can't create audio in the format Prezi requires, so there's no voice over, but I think it still works.
I finally got around to finishing Shorty, my URL shortener for my vanity short-domain, ❺➠.ws. I did the whole thing in Scala as a way to create a fully-functioning application that I would use and that I could finish in my non-work time. Scala unequivocally made this task enjoyable and quick. J2EE, on the other hand, did not help one bit.
My Scala code is so much shorter and easier to follow than the Java equivalent. Consider this code that, given
the request path, finds a controller to handle it, and then calls the appropriate method based upon
the HTTP method:
route(path) match {
  case Some(controller) => {
    val result = determineMethod(request) match {
      case GET => controller.get(params(request))
      case PUT => controller.put(params(request))
      case POST => controller.post(params(request))
      case DELETE => controller.delete(params(request))
    }
    // ... render the result ...
  }
  case None => // ... no controller for this path; 404 ...
}
ScalaTest resulted in a lot more readable code than JUnit or TestNG would've. Because of Scala's syntax, the tests are also free of weird dots and "literate" syntax that
isn't quite that literate.
it ("should respond to get for a URL that is known") {
val controller = new OneUrlController(hasher,"738ddf")
val result = controller.get(Map())
result.getClass should equal (classOf[URL])
result.asInstanceOf[URL].url should equal
("http://www.google.com")
}
I really wanted to like SBT, and, while it's a billion times better than maven, it's still not as easy to use as I'd like it to be.
I like:
While SBT is light-years ahead by using an actual programming language, I found it very difficult to customize. Part of this is that the scaladoc tool gives developers no help in documenting their API, but, when it comes down to it, Scala and Java are not system automation languages.
Scaladoc is nowhere near as powerful as Javadoc. It makes it very hard to document how to use your code. Scala should have a more advanced documentation system than Java, but it actually has a much more primitive one; even RDoc is better. Hopefully, as Scala's popularity increases, the tools surrounding it will improve.
Deployment is an underappreciated aspect of why Rails is so easy to use; copy/push your code to the production server, tell it you are running in production, and go. With J2EE, you get NONE of this.
If you want to alter configuration based upon environment, you are entirely on your own. J2EE, Ant, Maven, and SBT give you no real help or support; you have to roll it yourself. I'm just shocked at this omission; J2EE is ten years old and still has not provided a solution for something that every project needs. Amazing.
Java 5 is at end of life. The latest released Servlet Spec still doesn't support generics and is still completely schizophrenic about its API (some methods use Enumeration, some use arrays, some use Iterable. Ugh).
The 3.0 spec looks slightly more sane, but it really doesn't do us any favors. web.xml is a trainwreck of stupidity, there's zero support for conventions, and the whole thing just feels like a solution designed for a problem that few of us ever have.
class TwitterUser(val username:String) {
  def url = "http://www.twitter.com/" + username
  def recentTweets = // imagine some code here
}

val me = new TwitterUser("davetron5000")
assertEquals(10, me.recentTweets.size)

val fake = new TwitterUser("davetron5001")
assertEquals(0, fake.recentTweets.size)

// Along with our TwitterUser class def
object @@ { // "@" is a reserved word :(
  def apply(username:String) = new TwitterUser(username)
}

// back to our test code
val me = @@("davetron5000")
val fake = @@("davetron5001")

// This replaces the TwitterUser class
// and @@ singleton object
case class @@(val username:String) {
  def url = "http://www.twitter.com/" + username
}

user match {
  case @@("davetron5000") => "it's you, dude"
  case user: @@ => "it's someone else"
}
I find that Scala is one giant Rube Goldberg machine that manages to do things not easily done otherwise. By this I mean that Scala has many features that, by themselves, seem very strange, but, in combination, enable some very cool functionality. This is why I initially started my personal tour of Scala. I read stuff like explicitly typed self-references and was left scratching my head.
I thought it might be fun to deconstruct the "map literal" in Scala and observe how the features interact to create a very handy piece of code that isn't baked into the language. This assumes an understanding of some Scala basics.
Although Java 7 is getting map literals, Scala already has them (or so it appears):
val band = Map("Dave" -> "Bass",
"Tony" -> "Guitar",
"Greg" -> "Drums")
Most surprising to a Java programmer is the -> operator. This makes use of two Scala features: the ability to call any one-argument method as an infix operator, and implicit conversions.
It turns out that the -> operator is a method on the class Predef.ArrowAssoc. Predef is automatically imported in every Scala program, so you don't need to prefix anything with Predef. The method returns a tuple of its receiver and its argument, e.g.
val dave = new ArrowAssoc("Dave")
val entry = dave -> "Bass"
// entry is now ("Dave","Bass")
// which is a Tuple2[String,String]
Of course, we aren't creating ArrowAssoc instances anywhere, so how does this get called? This is where implicits come in. Suppose we change our simple example to:
val dave = "Dave"
val entry = dave -> "Bass"
// entry is still ("Dave","Bass")
// which is a Tuple2[String,String]
This still compiles, because Predef also defines an implicit conversion from any value to an ArrowAssoc:

implicit def any2ArrowAssoc[A](x: A): ArrowAssoc[A] =
  new ArrowAssoc(x)

When the compiler sees -> called on a String, which has no such method, it finds this conversion in scope and applies it for us.
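If it helps, you can apply the conversion by hand to see exactly what the compiler does (this uses the 2.8-era Predef name; later Scala versions rename this machinery):

// The desugared form of "Dave" -> "Bass"
val entry = Predef.any2ArrowAssoc("Dave").->("Bass")
// entry: (String, String) = (Dave,Bass)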
This means our code is now effectively:
val band = Map(("Dave" , "Bass"),
("Tony" , "Guitar"),
("Greg" , "Drums"))
This uses two additional Scala features: an object's apply method and variable-length argument lists.
This is much simpler to decode than the -> method; there is simply an object in scope named Map, and it has an apply method that takes a variable number of Tuple2 objects. Scala interprets method-call syntax on an object, with no method name given, as a call to that object's apply method (if it exists). So, removing the sugar, we have:

val band = Map.apply(("Dave" , "Bass"),
                     ("Tony" , "Guitar"),
                     ("Greg" , "Drums"))
That's all there is to it! The main thing to note is that none of this is baked into the language; it's all built from general-purpose features (methods as operators, implicit conversions, apply, and varargs) that you can use in your own code.
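To drive that home, here's a toy sketch of an object of my own (the MyMap name is made up) whose varargs apply method gets the exact same call syntax:

// An object with a varargs apply method gets the Map(...)-style call syntax.
object MyMap {
  def apply[A, B](entries: (A, B)*): Map[A, B] =
    entries.foldLeft(Map.empty[A, B]) { (m, entry) => m + entry }
}

val band = MyMap("Dave" -> "Bass", "Tony" -> "Guitar", "Greg" -> "Drums")
// band: Map[String,String] = Map(Dave -> Bass, Tony -> Guitar, Greg -> Drums)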