Riak TS Agility on handling Petabytes of Data

Riak TS Agility on handling Petabytes of Data

rajaa.krishnamurthy

Dear Team,

 

As part of our validation process, we would like to understand certain aspects of Riak TS. Our application receives around 100 TB of real-time data per day, which will grow beyond a petabyte within a short time.

Would Riak TS be an ideal choice for this kind of application? If so:

1. How would it handle data at this volume?
2. What read/write performance can we expect?
3. How fast would Riak TS be compared to MongoDB, Cassandra, and Druid?

 

 

Thanks & Regards,

Rajaa Krishnamurthy,

IoT Gateway Developer

Mob: +91 9940318287 | E-Mail: [hidden email]

TCS WeCare: BTnet Tel : 661 8179 | Tel: +44 (0) 121 311 8179 - WeCare team can be reached if primary on-call support number is not reachable. Available 24 X 7

 


 


Re: Riak TS Agility on handling Petabytes of Data

Alexander Sicular
Hi Rajaa,

What's your retention policy? At the moment, TS supports a global TTL. What's your read pattern? Is this a metrics or logging use case, i.e. can you downsample?
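
For reference, downsampling in TS usually means rolling raw points up into coarser intervals with an aggregate query. A minimal sketch with the Basho Python client, against a hypothetical "telemetry" table keyed by device_id and time (every name below is made up for illustration):

    from riak import RiakClient

    client = RiakClient(host='127.0.0.1', pb_port=8087)

    # Average one device's readings over a single 5-minute window.
    # TS requires the WHERE clause to pin the non-time partition key
    # fields (here device_id) and to bound the time range.
    start_ms = 1479427200000               # window start, ms since epoch
    end_ms = start_ms + 5 * 60 * 1000
    query = """
        SELECT AVG(value) FROM telemetry
        WHERE device_id = 'sensor-001'
          AND time >= {0} AND time < {1}
    """.format(start_ms, end_ms)
    result = client.ts_query('telemetry', query)
    print(result.rows)                     # e.g. [[23.4]]

Running that query per window (or using GROUP BY on TS releases that support it) gives a downsampled series that is far cheaper to retain than the raw feed.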

Thanks,
Alexander



RE: Riak TS Agility on handling Petabytes of Data

rajaa.krishnamurthy

Hi Alex,

 

Thanks for your swift response.

 

I will try to elaborate on the use case so you have the context.

 

Our application is related to telemetry. We want to build a system that can handle writes without any data loss. We are expecting 100 TB of data per day, with a data retention period of 90 days. We would read the records back to display them in a portal and to use them for analytics. Would Riak TS be able to handle this kind of data?
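
For a feed like this, the usual TS modelling approach is a wide table keyed by a series identifier plus a quantized timestamp, so that writes spread across the cluster and time-range reads stay local. A rough sketch with the Basho Python client; the table name, columns, and the 15-minute quantum below are placeholders, not a sizing recommendation:

    from riak import RiakClient

    client = RiakClient(host='127.0.0.1', pb_port=8087)

    # Hypothetical schema: partition on (device_id, 15-minute quantum of
    # time); the local key (device_id, time) keeps rows within a quantum
    # ordered by timestamp.
    client.ts_query('telemetry', """
        CREATE TABLE telemetry (
            device_id  VARCHAR    NOT NULL,
            time       TIMESTAMP  NOT NULL,
            metric     VARCHAR    NOT NULL,
            value      DOUBLE,
            PRIMARY KEY (
                (device_id, QUANTUM(time, 15, 'm')),
                device_id, time
            )
        )
    """)

    # Writes go in as batches of rows; timestamps are ms since epoch.
    table = client.table('telemetry')
    table.new([
        ['sensor-001', 1479427200000, 'temperature', 23.4],
        ['sensor-001', 1479427201000, 'temperature', 23.6],
    ]).store()

Whether a cluster keeps up with 100 TB/day then comes down to node count, disk throughput, and batch sizing, which is worth benchmarking rather than assuming.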

 

As far as I understand, the global TTL enables a table-level expiration policy. Please correct us if we are wrong.
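
As far as the TS documentation goes, the global TTL is set in riak.conf and applies to the whole storage backend rather than to an individual table; roughly these knobs are involved (exact names worth verifying against the docs for the TS version in use):

    ## riak.conf -- global object expiration (cluster-wide, not per table)
    leveldb.expiration = on
    leveldb.expiration.retention_time = 90d
    leveldb.expiration.expiry_mode = whole_file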

 

Thanks & Regards,

Rajaa Krishnamurthy,

IoT Gateway Developer

Mob: +91 9940318287 | E-Mail: [hidden email]

TCS WeCare: BTnet Tel : 661 8179 | Tel: +44 (0) 121 311 8179 - WeCare team can be reached if primary on-call support number is not reachable. Available 24 X 7

 


 



_______________________________________________
riak-users mailing list
[hidden email]
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com