Posts tagged reviews

ExigoSource Review

ExigoSource is a subscription service that does research for Amazon resellers. Basically, they hunt for arbitrage opportunities between Amazon and other online vendors that you could capitalize on.

Not sure how well-known it is - I heard about it somewhere on Quora.

Here's a review after a few days of use.

Signup and pricing

Pricing is $99/month. The website claims to have only a limited number of memberships and has you fill out a form to join a waiting list.

About 12 hours after I joined the waiting list, I was offered membership.

So this might just be a marketing gimmick, or there's a very high turnover rate.

Layout of the service

There's one area with a few guides, which aren't anything really special. They suggest you buy a label printer, etc.

The main draw of the website is the dashboard. It's clean and gets right to the point with the information you need.

The header of the dashboard (really it's a spreadsheet) describes all the metrics - entry date, product image, link to the product on Amazon, category, retail price (what you can buy it for someplace on sale), Amazon's lowest "Fulfilled by Amazon" price, Amazon sales rank, ROI as a percent, estimated monthly sales, net profit, weight and dimensions, then links to the item's price history on Amazon.

As you'll see as I walk through an example case, though, the metrics aren't entirely forthcoming...

Example

Here's a reselling opportunity from 9/17/2016 that was presented on ExigoSource. I'm going to blur the specifics so as not to compromise the privacy of the site.

exigosource-case-1_01

Now, ignore the ROI and net profit shown in the screenshot - those simply take the lowest price on Amazon and subtract the current retail price at Walmart.

Price at Walmart = $7.39 per unit

Lowest price on Amazon = $9.96 per unit

Let's look into that Amazon price. The first thing you'll notice is that Amazon isn't directly selling this toy at all. Anyone who wants to purchase it (sales rank was in the 10,000s, so people were buying) has to do so through independent Amazon sellers.

exigosource-case-1_02b

Diving into that $9.96 figure, we'll see that the vendor with that price is not using Fulfilled by Amazon and is therefore charging separately for shipping. So with FBA, what we could reasonably charge is $15.74, like everyone else using it does.

exigosource-case-1_03

Reasonable price on Amazon = $15.74

Now we'll analyze the $7.39 we were quoted on Walmart. Say we plan to buy 50 units.

The total cost, since shipping is free, ends up as $399.06 with taxes included. However, $399.06 at Walmart isn't necessarily $399.06 in cash.

What do I mean? We could buy a gift card for Walmart at less than face value. I used Gift Card Granny, which searches many gift card resale sites, to figure out we can save 3.14%.

exigosource-case-1_04

Saving 3.14% on $399.06 means our total for 50 units is now $386.53.

Real price at Walmart = $7.73 per unit

Now, we can't ship directly from Walmart to Amazon. In order to use Fulfilled by Amazon there's some minor prep work that Walmart warehouse employees won't do for us, like barcode-labeling the boxes.

This creates a variable in the sales equation that's difficult to gauge - how much prepping and shipping from us to Amazon will cost. For simplicity's sake I'll assume a flat rate of $50.00 (or $1.00 per unit).

Real price to us = $8.73 per unit

Throwing some figures into Amazon's FBA calculator...

exigosource-case-1_05

After Amazon costs we're left with $9.31 a unit, and we know each unit costs us $8.73, so that's a lousy $29 profit ($0.58 × 50 units) for all this work. We also put over $400 at risk to make it happen.
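To make the arithmetic easy to double-check, here's the whole case as a quick Python sketch. The numbers come from above; the $1.00/unit prep cost is my flat-rate assumption and the $9.31 payout is what the FBA calculator gave.

```python
units = 50
walmart_total = 399.06         # 50 units at $7.39 each, tax included, free shipping
gift_card_discount = 0.0314    # best rate found through Gift Card Granny

real_cost = walmart_total * (1 - gift_card_discount)      # ~$386.53
prep_and_shipping = 1.00 * units                          # my flat-rate assumption
cost_per_unit = (real_cost + prep_and_shipping) / units   # ~$8.73

fba_payout_per_unit = 9.31     # from Amazon's FBA calculator at a $15.74 list price
profit = (fba_payout_per_unit - cost_per_unit) * units

print(f"cost per unit: ${cost_per_unit:.2f}")   # $8.73
print(f"total profit:  ${profit:.2f}")          # ~$29, with ~$437 tied up
```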

What kinds of goods are researched?

From what I saw over a ~3 day span, kids' toys and baby products. If you're on some kind of list for doing weird things with kids and/or babies, you probably don't want to be taking these shipments at your house.

When I'd go to crunch the numbers, everything seemed to turn out like the case above with low profit per piece. You'd need to buy hundreds of these items to make anything lucrative.

Bottom line

ExigoSource offers a 3-day refund from the time you sign up. As you can tell from my writing above, I wasn't really enthusiastic and ended up getting the refund.

I will say their customer service is fantastic. Turnaround was very quick, no hassle.

ExigoSource overall is an interesting concept because I think their research is done by proprietary scraping software. If you saw my Scrabble Boggle bot, you know I'm into writing software to make money. Something along these lines may be a future project.

Is Uncubed Edge Any Good?

In an earlier post I mentioned that I'd signed up for Uncubed Edge, which at that time I grouped into the same realm as Treehouse and Code School. However, after going through the content, it's really not in that same group.
Uncubed Edge Screenshot with Contently

Treehouse vs. Code School vs. Uncubed Edge

They're all online means of learning tech topics. You could go as far as to call them "schools." Treehouse shows you how to do specific things with web development. Code School shows you how to do specific things with, more particularly, programming. Uncubed Edge shows you how certain startups achieved something, with the intention of helping you achieve that with your startup. Here are examples from each just to illustrate:
  • Treehouse: Build a Fully Functioning Web Application with AngularJS, Build a Choose-Your-Own-Adventure Story App on Android, Use Python on the Web with Flask ...
  • Code School: JavaScript Best Practices, CoffeeScript, Ruby Bits, Testing with RSpec ...
  • Uncubed Edge: How Contently Built a Large Audience, How Gilt.com Became Responsive and Adaptive, How Gilt Saves Hundreds of Hours with MetaSQL ...

Is Uncubed Edge any good? Worth the $19/month?

While I've enjoyed the content so far, I'm hesitant to say it's worth $19/month for new users. If there's one 30-minute lecture released each week, that's just 2 hours of new content per month. The worth of that seems closer to the $5/month rate I was grandfathered at. If you asked me if Treehouse is any good and worth the money, I would tell you it's good and worth $25/month. At $50/month they give you a lot of conference content, probably also worth it, though I've only been a silver member. As Uncubed Edge grows, it could get to the point of being worth the new member price.
(Image credit - twentydollarbill.info)

What's in each "class"?

Here are my notes from the early courses if you're curious what they actually entail. This could help inform a purchasing decision, though if you do sign up, I recommend still watching them in their entirety - here I'm really just skimming. Note that they're also brief: each class/lecture averages about 30 minutes altogether.
Uncubed Edge Dashboard

Building a Large Audience (with Contently)

Taught by and credit to - Shane Snow, Joe Lazauskas
3 main strategies to grow their publication / audience:
  1. Data-driven content creation
  2. Original research
  3. A/B testing (split testing)
Data-driven content creation
  • Google Analytics couldn't show whether the user was scrolling, highlighting, or had YouTube open in another tab...
  • If you get a user to spend at least 3 engaged minutes with your content, there's an over-50% likelihood they'll come back within the next week.
  • Built their own analytics product (Contently Insights) which essentially measures how engaged users are with the content.
  • For example, there's a running aggregate timer for each user - the time each individual user spends on each story shows what they care about. (A sketch of the idea follows this list.)
  • You can write things that reach a lot of people but don't hold their attention or build a relationship. (Reach vs. building an audience)
Contently Insights Screenshot
(from Contently Insights)
  • Insights proved better than Google Analytics at measuring growth, audience building, and predicting which pieces would succeed.
  • Sometimes an in-house solution for metrics can be best.
  • Getting more data-focused about content on the publication helped build an audience quickly.
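Contently Insights is proprietary, so this is just a minimal sketch of the mechanic as I understand it: the page sends a heartbeat while the reader is actually active, and the server keeps a running engaged-time total per user per story. All names and numbers here are hypothetical.

```python
from collections import defaultdict

HEARTBEAT_SECONDS = 5  # the client would ping only while the tab is focused/active

# (user_id, story_id) heartbeat events as they'd arrive from readers' browsers
events = [("u1", "sponsored-content-study"), ("u1", "sponsored-content-study"),
          ("u2", "sponsored-content-study"), ("u1", "okcupid-style-piece")]

engaged = defaultdict(int)  # the running aggregate timer per user/story
for user, story in events:
    engaged[(user, story)] += HEARTBEAT_SECONDS

for (user, story), seconds in engaged.items():
    # the "3 engaged minutes" threshold mentioned above
    flag = "likely to return" if seconds >= 180 else ""
    print(user, story, seconds, flag)
```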
Original research
  • They were writing interesting things, but not introducing "really new" content to the Internet.
  • With original research, you're producing content people will quote and recite.
  • By introducing original research, Contently also became relevant to journalists. Then journalists are pointing to you.
  • They'd announce "there's going to be research," then capture emails - if you wanted the research report, you had to submit your email.
  • This siphoned off audience from all the places writing about them.
  • Email is still the most important tool you have as a publisher. It's a relationship you own, and it's the spark that starts the fire of sharing.
  • Here's one of their studies on disparity between "sponsored content" on social media and organic content:
Contently Sponsored Content Deception
(from Contently)
  • That was just introducing new research on the back of old research. Proved extremely popular.
  • If you think a piece will go really big, it may not be worth collecting emails. What you get may not be what you truly want.
  • OKCupid's OkTrends blog - not targeting online daters, but reporters who'll write about their stuff...
  • Everyone reads that -> OKCupid publicity
  • When people write about you, they inevitably mention what you do.
  • By doing research yourself, you're providing something genuinely new to the Internet. You're making the news.
A/B testing (split testing)
  • Contently conducts heat map testing (e.g. Inspectlet).
  • Split tests headlines - ultimately finding a permanent one.
Contently Split Testing Screenshot
Example of split testing headlines
  • You can "feel" like something's a good post or a good headline, but without data that feeling doesn't count for much.
  • Businesses like Huffington Post were built on split-testing.
  • The Onion writes 25 headlines for each story to figure out which one's the best.
  • Upworthy is the fastest growing media company in the world and does the same thing, with about a dozen options.
  • Testing email subject lines is easy with MailChimp, which lets you send to just 10% of your audience, for example, then gauge the results. (A sketch of evaluating such a test follows this list.)
  • Headlines, email subject (via open rate or click rate), images - split test everything you can.
  • Often A/B test results will surprise you. A gut feeling isn't nearly as powerful as data being tracked progressively.
  • You always want to be using your content resources in the most efficient way possible.
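For a subject-line test like the MailChimp one above, the measurement side is just comparing two proportions. Here's a quick sketch of checking whether one subject line's open rate genuinely beats another's with a two-proportion z-test - the sample counts are invented.

```python
from math import sqrt

def two_proportion_z(opens_a, sends_a, opens_b, sends_b):
    """z-statistic for the difference between two open rates."""
    p_a, p_b = opens_a / sends_a, opens_b / sends_b
    pooled = (opens_a + opens_b) / (sends_a + sends_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    return (p_a - p_b) / se

# Send each variant to a small slice of the list, then keep the winner.
z = two_proportion_z(opens_a=230, sends_a=1000, opens_b=180, sends_b=1000)
print(f"z = {z:.2f}")  # |z| > 1.96 means significant at the usual 5% level
```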
Contently Twitter Ad Split Testing Screenshot
Closing
When you add everything together - getting more out of your Twitter ads because of split testing, achieving a 2% better open rate - you're building a better-retained audience. It's how Contently grew its readership from 20K dedicated readers to hundreds of thousands. (Contently Strategist)
Contently Uses SumoMe
Also they apparently use SumoMe.

MetaSQL (with Gilt)

Taught by and credit to - Igor Elbert
MetaSQL is "SQL that generates SQL" - it comes in handy for writing generic functions that work on virtually any table.
Setting up MetaSQL
  • He was using Aqua Data Studio, loaded with U.S. Census data, for the demo.
  • MetaSQL can be applied to any SQL server.
  • Instead of writing SQL to get min, max, and average of all columns you can write SQL to write that SQL. (Meta)
  • Here's a MetaSQL snippet and the 10 lines of SQL it outputs:
Igor Elbert MetaSQL Screen 1
  • Writing that by itself, statement by statement, would have been long and tedious.
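I won't reproduce Igor's actual SQL from the screenshot, so here's the same idea sketched in Python instead - generating the statements from the table's own metadata, then running them - using sqlite3 so it's self-contained:

```python
import sqlite3

# A tiny stand-in table - the lecture demo used U.S. Census data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE census (population INTEGER, median_age REAL, name TEXT)")
conn.executemany("INSERT INTO census VALUES (?, ?, ?)",
                 [(1000, 31.5, "A"), (2500, 40.2, "B"), (400, 28.9, "C")])

def generate_min_max_avg(conn, table):
    """Write the SQL instead of hand-writing it: one MIN/MAX/AVG
    statement per numeric column, discovered from table metadata."""
    numeric = ("INTEGER", "REAL", "NUMERIC")
    cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")
            if row[2].upper() in numeric]
    return [f"SELECT MIN({c}), MAX({c}), AVG({c}) FROM {table}" for c in cols]

for stmt in generate_min_max_avg(conn, "census"):
    print(stmt, "->", conn.execute(stmt).fetchone())
```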
Iterations to improve
  • In MetaSQL, with every iteration you can make it better and better.
  • Once done you can use it on any table, any number of attributes, anything.
  • It becomes a universal procedure other people can reuse without knowing how it's done.
  • If you replace a specific table name with a universal operator, MetaSQL will go ahead and do the work on all tables in the database.
  • You can make your MetaSQL "smarter" much like you would make SQL smarter - "if column is ____, don't treat data as numeric", etc.
Immediate insights
  • The next person using your MetaSQL would just put their table name in, and they have an immediate insight without the legwork.
Letting MetaSQL do the work
  • Expanding on this whole idea - say you tried to sell thousands of products, and some didn't sell.
  • Now you want to go through all the products to find out why. (Univariate analysis)
  • Manually you would have to go through maybe hundreds of attributes - e.g. "what's the average price of products that sold vs. unsold?"
  • Then you might say "oh that didn't differ enough", then compare the next attribute, over and over for days or months.
  • Instead, generate MetaSQL that writes SQL to go through every attribute, compare one group vs. the other in a matrix, and single out differences that are statistically significant. (A sketch follows.)
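Sticking with the Python-generating-SQL sketch from above, and assuming a hypothetical products table with a 0/1 sold column, the univariate pass could generate one comparison query per attribute:

```python
import sqlite3

# Hypothetical products table - the sold flag marks what actually sold.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (price REAL, weight REAL, sold INTEGER)")
conn.executemany("INSERT INTO products VALUES (?, ?, ?)",
                 [(9.99, 1.2, 1), (24.99, 3.4, 0), (7.49, 0.8, 1), (19.99, 2.9, 0)])

def generate_group_comparisons(conn, table, flag_col):
    """One query per numeric attribute comparing sold vs. unsold means -
    the comparison that would otherwise be done column by column by hand."""
    numeric = ("INTEGER", "REAL", "NUMERIC")
    cols = [r[1] for r in conn.execute(f"PRAGMA table_info({table})")
            if r[2].upper() in numeric and r[1] != flag_col]
    return [(c, f"SELECT AVG(CASE WHEN {flag_col} = 1 THEN {c} END), "
                f"AVG(CASE WHEN {flag_col} = 0 THEN {c} END) FROM {table}")
            for c in cols]

for col, stmt in generate_group_comparisons(conn, "products", "sold"):
    avg_sold, avg_unsold = conn.execute(stmt).fetchone()
    print(col, "sold:", avg_sold, "unsold:", avg_unsold)
```

A statistical test (or even a simple ratio threshold) over those group means would then single out the attributes worth a closer look.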
Igor Elbert MetaSQL Screen 2
Closing
  • This all falls under "lazy" programming principles - don't repeat yourself, save time, work efficiently.
  • With MetaSQL you can write something to do your SQL writing - the time-consuming and tedious part.

Transitioning to a Responsive, Adaptive Site (with Gilt)

Taught by and credit to - Greg Mazurek
The journey from a legacy site to a responsive site.
Migrating Desktop to Mobile
  • There was a mobile version of the Gilt site ("m.gilt.com") from when mobile browsing was primarily just Blackberry.
  • That was an MVP - barebones shopping, static product pages, no Javascript, minimal CSS.
  • Next there was a client app built on BackboneJS. A lot better, but mobile was still considered an afterthought.
  • Slowly, and then very quickly, that client app couldn't keep up with the new features on the core website.
  • The core website, obviously, started with the intention of maximizing user experience on the desktop.
  • So Gilt went through a transition period to shift the core website into a universal web experience across devices.
Responsive and adaptive design at Gilt
  • Responsive = changes in the width of the viewport
  • Adaptive = changes in the features available to you
  • Overall you want to maximize user experience regardless of the user's device.
  • When the user lands on your site, you need to maximize experience for that viewport. (Responsive)
  • No one just sits there, changing the width of their viewport back and forth all day.
  • Maybe an iPhone 5 has different touch gestures than an early Samsung Galaxy - must be considered. (Adaptive)
  • As a desktop experience is transitioned to responsive, the code becomes more complex. There's no way around that.
  • Take care to craft maintainable code, something an engineer walking in off the street could work on.
  • When Gilt receives a user agent, they send it into one of 3 "buckets" - minimal, intermediate, or full. (A sketch of the idea follows this list.)
  • Full experience is everything - like desktop. Minimal might be on a low-bandwidth network or older browser - capabilities pared down.
  • For Gilt it's important to know which devices customers use most, then emphasize making a beautiful experience for those.
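The course doesn't show Gilt's classification code, so here's a purely hypothetical Python sketch of the bucketing idea - the substring rules are mine, not Gilt's:

```python
def target_experience(user_agent: str) -> str:
    """Sort an incoming user agent into a target-experience bucket.
    Illustrative rules only - Gilt's real logic is not public."""
    ua = user_agent.lower()
    if "blackberry" in ua or "msie 8" in ua:
        return "minimal"        # old browser or device: pared-down capabilities
    if "opera mini" in ua or "android 2" in ua:
        return "intermediate"   # partial feature support
    return "full"               # everything, like desktop

print(target_experience("Mozilla/5.0 (BlackBerry; U; BlackBerry 9900)"))   # minimal
print(target_experience("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10)"))  # full
```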
Adaptive design and target experiences
This is Scala (and it looks like he's using Sublime Text):
Gilt Target Experience
Handlebars
  • Gilt has Handlebars references to the minimal target experiences (from the above image).
  • Through those references you can write DOM features for when the device meets certain levels of target experience.
  • It's important that devices don't take any DOM or payload they don't need. That's why this is done server-side.
  • On low-bandwidth networks, Gilt frequently excludes all tracking. Tracking code can be heavy and degrade the user experience.
  • Obviously that doesn't help the marketing department but it does help the members.
Javascript
  • Target experience gets appended to the window object.
  • This allows for client-side visual manipulation, while the Handlebars handled the server side.
  • In the Javascript you can have different functions depending on which experience is coming through.
Gilt Javascript
Pseudocode
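The real pseudocode is in the screenshots above; as a language-neutral sketch of the pattern (Python here, all function names hypothetical), the client just dispatches on whichever experience the server appended:

```python
# Different behavior per target experience - the value Gilt appends to window
def zoom_carousel():
    print("mouse-driven zoom and click-through")   # rich desktop interaction

def swipe_carousel():
    print("touch swipe between images")            # touch equivalent

handlers = {
    "full": zoom_carousel,
    "intermediate": swipe_carousel,
    "minimal": lambda: print("static image list"),
}

experience = "intermediate"  # client-side this would come off the window object
handlers[experience]()
```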
Applying to the carousel
  • On a desktop Gilt product page you have a lot more capabilities. You can zoom and quickly click through a product image carousel.
  • On an iPhone, you can swipe through images via touch. It gives a similar experience to desktop, but things had to be coded differently.
  • This is an example of using target experiences over a "hope one size fits all" approach.
Responsive design and LOSA
  • Media queries are great building blocks, but only great if they're used consistently.
  • LOSA = lots of small applications
  • Instead of focusing on monoliths, Gilt decided to focus on services.
  • LOSA at Gilt means most of the pages you see are largely driven by individual apps.
  • The Product Details page can be deployed independently of the Product Listing page, for instance.
  • This promotes autonomy in terms of teams and features.
  • This minimizes the chance of "side effects" when releasing code.
CSS and RespondJS
  • Gilt has tens of thousands of lines of CSS. How do you organize that?
  • Originally, media queries were written underneath the code they were overriding. That's fine if you have a small amount of code.
Gilt CSS JSBin
JS Bin example
  • But this becomes confusing once the code grows, things get amped up, and there are a lot of breakpoints... unwieldy.
  • Then Gilt came across RespondJS (polyfill) - where to put media queries in CSS was no longer a dilemma, especially with old IE support.
  • If someone came in to make a change to the old jumbled CSS, maybe they wouldn't be aware of all the references and everything going on.
  • The idea is to minimize opportunities for human error.
  • In-line media queries > separate blocks
Pictures and images
  • Images are heavy, especially for mobile, and can be problematic if not handled well.
  • Gilt uses large images which creates a dilemma - they want beauty but at a small cost. (Resource cost)
  • Their handling passes images down as they're needed, at particular breakpoints for different sizes.
Gilt Javascript Image Handling
  • They use PictureFill. You don't want to send a huge image to a small device - that's an unnecessarily big payload to scale down.
Testing and QA
  • User agent emulation in Chrome and other browsers is really useful, obviously something that couldn't have been done 10 years ago.
  • Have to be aware, though, that if you're emulating a user agent in Chrome you're still on the Chrome browser.
  • Gilt uses Genymotion and other tools to simulate different browsers / OS.
  • Want to make sure changes you make in one area won't adversely affect others.
  • Throttle test different connection speeds with something like Charles - what if someone's on the highway in a remote area?
Closing
  • The Gilt transition started out technical, then focused on QA and bringing design into the picture.

You can sign up for Uncubed Edge here.

 

A Brief Recap of Uncubed NYC

Today I had the pleasure of attending Uncubed NYC... for about 2 hours. St. John's sponsored me to go, so it was free. Ironically, classes kept me from spending as much time there as I would've liked. This is a brief recap.
Uncubed Name Tag
The skills track was scheduled to start at 9. I showed up at 8:45 but accidentally got in line for a women's Zumba class with some other guy - one of those situations where you see a line of people near an event you're attending and assume it's for that. "Are you going to be joining us today?" they asked. Yes, we assured them, we've got our tickets and everything. So... by the time that was figured out and I got into Uncubed, it was maybe 9:05.
The check-in process was smooth - all tickets were printouts with QR codes. Easy enough.
Uncubed NYC Ticket
The decor was all old-school video game themed. Not sure how that fit the event per se, but I got to play Duck Hunt. An ideal setup with NES blasters and tube TVs.
Uncubed Duck Hunt
The first talk in the skills track was really interesting. Titled Breaking Bad vs. Superman: Applying Data Science to Our Passion Projects and presented by Irmak Sirer, the root of it was about quantifying human reaction. I took a lot of notes on that one, so I'll give a rundown.
Uncubed Randy Gingeleski
Me there. From Twitter.

Breaking Bad vs. Superman: Applying Data Science to Our Passion Projects

Metis Data Science Bootcamp (Flyer)

  • 96% of data is from the last 2 years.
  • It's projected that by 2017, there will be 150,000 data science jobs with no one qualified to fill them.

Irmak's Data Science Passion Project

  • Is my reaction to a movie predictable?
  • Star Wars as an example - he discusses how his own rating of the movie actually changed over time (like age 9 vs. 13 vs. 30).
  • Suggesting that movie reactions are difficult to predict.

Netflix

  • 2006 - formulates Cinematch algorithm, which utilizes the > 1 billion ratings on Netflix at that time
  • Do you have a "soulmate" in taste?
  • Perfect soulmate - rates everything exactly the same as you, but he's seen Book of Eli and you haven't. Since he liked it, you'll like it.
  • Calculated soulmate - draws on a bunch of people who kind of match your tastes, formulates a "soulmate," and uses that as a basis - a weighted average based on how closely tastes have aligned in the past. (A sketch follows this list.)
  • These were the ideas behind Cinematch. The algorithm does really badly with movies that have rarely been watched.
  • Also, movies that elicit love/hate reactions are biggest source of error. Popular but weird movies. Think Napoleon Dynamite.
  • If you were to take mean scores of everything and use those as a basis for predicted ratings, the error (RMSE) was determined to be about 1.0540 stars. (Netflix star system)
  • The error (RMSE) of the Cinematch algorithm ("soulmates") was determined to be 0.9525 stars.
  • So Cinematch was a 9.6% increase in accuracy over the trivial mean score method. And I think the stat about how much this increased viewership on the "top" movies was 1200%.
  • Also in 2006 - Netflix announces $1M bounty for a further 10% improvement.
  • 2009 - the Netflix bounty is awarded, team managed to achieve a 10.09% accuracy improvement.
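Cinematch itself is proprietary, but the "calculated soulmate" idea is easy to sketch: predict your rating for an unseen movie as a similarity-weighted average of other users' ratings. The data and the similarity measure below are made up for illustration.

```python
# Hypothetical ratings: user -> {movie: stars}
ratings = {
    "you":   {"Star Wars": 5, "Troll 2": 1, "Indiana Jones": 4},
    "alice": {"Star Wars": 5, "Troll 2": 1, "Indiana Jones": 5, "Book of Eli": 4},
    "bob":   {"Star Wars": 2, "Troll 2": 5, "Indiana Jones": 1, "Book of Eli": 2},
}

def similarity(a, b):
    """Crude taste match: inverse of the mean absolute rating difference
    over movies both users rated (1.0 = identical tastes)."""
    shared = set(ratings[a]) & set(ratings[b])
    if not shared:
        return 0.0
    diff = sum(abs(ratings[a][m] - ratings[b][m]) for m in shared) / len(shared)
    return 1.0 / (1.0 + diff)

def predict(user, movie):
    """Weighted average of other users' ratings - the calculated soulmate."""
    votes = [(similarity(user, u), r[movie])
             for u, r in ratings.items() if u != user and movie in r]
    total = sum(w for w, _ in votes)
    return sum(w * stars for w, stars in votes) / total if total else None

print(round(predict("you", "Book of Eli"), 2))  # ~3.5 - closer to alice's 4 than bob's 2
```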

Looking at Movie Data Differently

  • Before: the solid assumption was that you have a certain taste, your taste dictates ratings for unwatched content, and after you watch something this will be clear. This is largely wrong.
  • Your taste changes over time (even day to day), and is also affected by how many ratings you've given that day and your average rating for the day.
  • Taking these things into account - your time-dependent rating tendencies - makes for a more accurate algorithm than Cinematch without even considering movie content. I thought this was significant: looking at rating behavior around the movies, rather than the movies themselves, beat Cinematch.
  • We cannot explicitly compare a movie with all others we've seen.
  • Environmental factors play a huge role.
  • Some people are followers, some people behave like "hipsters", once you kinda figure a person out you can make accurate predictions.
  • Take "Music Lab", an experimental website for downloading music. When other people's ratings are invisible, you get more or less equal ratings. When other people's ratings (or the illusion of other people's ratings) are shown and quantified, things get interesting.
  • Social influence plays a huge role in what will be a hit, what will be a miss.

Diving in Further

  • Degree of liking is difficult to predict consistently and accurately with a number.
  • The difficulty in answering "What are your top 20 movies?" (if you really sit down and think about it) illustrates how degree of liking is sensitive and vague.
  • "Enjoyment" from a movie is a very high-dimensional concept. There are movies that yield completely different flavors of reaction from you, and how is that supposed to be broken down into 4.3, etc.
  • For the most part, it's straightforward to compare just 2 movies. Fully analyze all the comparisons to see where things ultimately stand.
  • If Star Wars > Indiana Jones and Indiana Jones > Troll 2, Star Wars > Troll 2 can be inferred. Simple statistics.
  • Elo rating system (The Social Network's "Facemash") - see the sketch after this list.
  • Bayesian ranking algorithms (Microsoft utilized one, TrueSkill, for Halo matchmaking)
  • Elo and Bayesian were originally applied to chess.
  • Asking about movies you're really uncertain about is better than asking what you probably know - produces more valuable data.
  • Quantifying human reactions is hard.
  • Many comparisons for a movie will average out environmental factors.
  • Don't necessarily want to average out social influence, that's part of nature.
  • Most important part of data science is design - figuring out the right questions to ask.
  • SQL + Python for this movie project
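Elo is simple enough to sketch in a few lines. Here it's applied to pairwise movie matchups; the starting ratings and K-factor are the conventional chess defaults, not anything from the talk.

```python
def elo_update(r_winner, r_loser, k=32):
    """One Elo update after a head-to-head comparison."""
    expected = 1 / (1 + 10 ** ((r_loser - r_winner) / 400))  # winner's win probability
    return r_winner + k * (1 - expected), r_loser - k * (1 - expected)

movies = {"Star Wars": 1500, "Indiana Jones": 1500, "Troll 2": 1500}

# Each "which did you like more?" answer is treated as one match.
for winner, loser in [("Star Wars", "Indiana Jones"),
                      ("Indiana Jones", "Troll 2"),
                      ("Star Wars", "Troll 2")]:
    movies[winner], movies[loser] = elo_update(movies[winner], movies[loser])

print(sorted(movies.items(), key=lambda kv: -kv[1]))
# Star Wars ranks first, Troll 2 last - consistent with the comparisons given.
```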

Hoping this isn't too hard to follow.


The Uncubed people are launching a service that seems like it will be similar to Treehouse - a pay-monthly, no-contract, learn-startup-topics-by-video-heavy-modules service. It's called Uncubed Edge.
Uncubed Edge
While attendees were able to sign up already for a locked-in $5/month, when the public launch happens it will still be only $20/month. If the content there is anything like today's event, it's worth every penny.
Screen Shot 2014-11-14 at 7.25.41 PM
At $5, signing up was a no-brainer for me - I may write a full review in the future. No modules are up currently.

Update: An Uncubed Edge review