#mongodb logs for Monday the 24th of March, 2014

[02:00:32] <pengwn> I would like some info on whether to use record references or record embedding on sharded instances. Which will give better performance?
[02:41:03] <alexi5> hello
[04:12:32] <pengwn> Hi.
[04:12:57] <pengwn> are my posts being seen by everyone?
[04:57:29] <Pengwn2> Hi
[04:57:50] <Pengwn2> Can you please let me know how to configure kvirc to use this mongodb group?
[04:58:42] <pengwn> Ok it seems to be working I am able to see these in my KVIrc client.
[06:10:54] <Brij-DBA> Hi
[06:12:04] <Brij-DBA> I am looking for a mongodb setup guide on linux. Can somebody please share easy steps to set it up?
[06:20:06] <mboman> Brij-DBA, apt-get install mongodb worked for me, could you be more specific?
[06:38:36] <Brij-DBA> I am new to mongodb, so I just need easy steps, if anyone has those handy, to set up one mongodb env on linux.
[06:52:14] <mboman> Brij-DBA, which Linux distr?
[06:53:50] <mboman> Brij-DBA, have you checked the official manual? http://docs.mongodb.org/manual/administration/install-on-linux/
[07:44:53] <Brij-DBA1> yeah, thanks, that's helpful. will try to do the setup accordingly. thanks again!
[07:54:36] <Mothership> hello
[07:55:15] <Mothership> is it possible in mongo.config to add more than one path to db files?
[08:37:04] <rAg3nix> hi, i am new to mongodb. i am trying to import a mysql database using mongoimport, but there is a limitation of 16mb whereas my db size in json is 350 mb. can anyone please tell me how to do it?
[08:42:44] <joannac> rAg3nix: the maximum document size is 16mb
[08:43:10] <joannac> surely you're not storing the whole thing as one document...
[08:43:30] <joannac> Mothership: no. what's the use case?
[08:44:28] <rAg3nix> i am trying to do that i guess, and it's not happening for obvious reasons!! how do i do it? i tried using gridfs!! it stores the whole file; how do i run a query on that?
[08:44:56] <rAg3nix> joannac:
[08:45:38] <Mothership> joannac, I want to commit my project to git with db files included, so when I pull it, I can automatically get the updated db files without the need to copy them to the main db path
[08:46:25] <Mothership> I know I can use bashscript, but I just thought if it's possible to add more db paths
[08:49:45] <joannac> Mothership: umm, how big is your database? I can't see that scaling well
[08:50:40] <joannac> rAg3nix: you don't? import your rows as documents or something. you're going to get terrible performance if you store your whole sql database as a mongodb gridfs file...
[08:52:47] <Mothership> joannac, its just for pre-production
[08:52:52] <rAg3nix> joannac: no no, what i am trying to do is: i have the base database, and there are 3 tables that i need to migrate to mongodb on a master-slave replica. the first table was done; the second one, the one i am trying to do now, is 350 mb and is refusing to go in as a single document. how do i resolve it, or how do i do it?
[08:57:47] <joannac> rAg3nix: you have a single table which is >350mb which you are trying to insert as a single mongodb document
[08:58:01] <rAg3nix> joannac: yes
[09:00:06] <joannac> rAg3nix: and you're importing CSV?
[09:00:17] <rAg3nix> joannac: json
[09:02:25] <joannac> I'm just confused why you exported your whole table as one single JSON document
[09:04:01] <Mothership> is underscore the most used naming convention for mongodb?
[09:04:01] <joannac> Mothership: I still don't understand your use case. You want to clone your database without using mongodump/restore?
[09:04:24] <Mothership> joannac, don't mind it, it was more of a theoretical question
[09:07:26] <rAg3nix> joannac: how should i do it then ?
[09:13:04] <the8thbit> I asked this in #Node.js, but it's probably better suited for here:
[09:13:05] <the8thbit> How would I do something like Foo.findOne( { fieldA: bar OR fieldB: bar }, function( err, foo ) { //code } ); ?
[09:14:29] <Nodex> use an $or :)
[09:15:00] <Nodex> http://docs.mongodb.org/manual/reference/operator/query/or/#op._S_or
[09:15:48] <the8thbit> Nodex: Thanks! This will work in Mongoose you think?
[09:18:53] <hipsterslapfight> it will
[09:26:13] <Nodex> the8thbit : I would think so yes
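A minimal sketch of the $or form Nodex points to, written against a hypothetical Mongoose model named Foo and a value bar, both taken from the8thbit's own example rather than a real app:

```javascript
// Hedged sketch: find one document where either fieldA or fieldB equals bar.
Foo.findOne({ $or: [ { fieldA: bar }, { fieldB: bar } ] }, function (err, foo) {
  if (err) return console.error(err);
  console.log(foo); // null when nothing matches
});
```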
[09:28:45] <the8thbit> Nodex: Thanks. If I want to do a query from my client, what would be the easiest way to do this? Just set up a route for a .find() on the particular collection, and then use ajax to pass it a query?
[09:32:42] <amitprakash> Hi, given an _id, is it possible for me to look for a document across multiple collections matching _id
[09:36:39] <Nodex> the8thbit : that's down to your app
[09:36:52] <Nodex> amitprakash : not natively no
[09:37:17] <the8thbit> Nodex: I guess what I was asking is 'is this a reasonable approach', though I suppose you answered my question :)
[09:39:33] <Nodex> if you can't see the security hole in that method then you might be biting off more than you can chew
[09:40:34] <the8thbit> hm
[09:40:50] <the8thbit> Nodex: What security hole?
[09:42:28] <Zelest> what if someone modifies your query?
[09:44:08] <the8thbit> Zelest: Then they would just get the wrong result?
[09:44:48] <amitprakash> wrong result could mean loss of privacy for someone
[09:45:03] <the8thbit> Ahhh, I see
[09:45:13] <the8thbit> In this particular case, I'm not querying private info
[09:45:32] <amitprakash> In general this is a bad idea
[09:45:41] <Nodex> it's not really the point. Build and follow good practices
[09:45:44] <amitprakash> Though I do not have the context for what you guys are discussing
[09:46:01] <the8thbit> Nodex, amitprakash: Well, I'm looking for good practices :)
[09:48:10] <the8thbit> amitprakash: Basically, something similar to a reddit profile, where I have a list of comments made by all users, and I need to list just the ones made by the user who owns the profile
[09:50:13] <the8thbit> I have authentication, so for private info, I could just make sure that the user is authenticated using the same username as they make the query for?
[09:50:57] <amitprakash> Why does a user make a query at all
[09:51:05] <amitprakash> Queries should be server side
[09:51:31] <amitprakash> In general, do not expose your db to the public
[09:52:03] <amitprakash> Nodex, so there's nothing short of cycling through multiple collections and looking for the _id
[09:52:25] <amitprakash> Nodex, also, can two documents share the same _id when they lie in different collections
[09:55:53] <the8thbit> amitprakash: We're talking about the same thing, yes? I set up a route, say /mongo/userComments, that takes a string or two as a query. I then have a mongoose query server-side that takes the string I sent the route and does the actual querying
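A rough sketch of the route the8thbit describes, assuming Express and a hypothetical Mongoose model Comment with an author field (none of these names come from his actual code); the client only ever sends a plain string, and the query itself is built server-side:

```javascript
// Hypothetical route: GET /mongo/userComments?username=alice
app.get('/mongo/userComments', function (req, res) {
  var username = String(req.query.username); // force a string so no query operators sneak in
  Comment.find({ author: username }, function (err, comments) {
    if (err) return res.status(500).json({ error: 'query failed' });
    res.json(comments);
  });
});
```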
[09:57:44] <Nodex> amitprakash : I am not sure on that one, I don't see why not
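Since there is no native cross-collection lookup, the "cycle through collections" approach amitprakash mentions looks roughly like this in the mongo shell (the id below is a placeholder):

```javascript
// Hedged sketch: search every collection in the current database for a single _id.
var id = ObjectId("533000000000000000000000"); // placeholder
db.getCollectionNames().forEach(function (name) {
  var doc = db.getCollection(name).findOne({ _id: id });
  if (doc) print("found in " + name + ": " + tojson(doc));
});
```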
[11:00:17] <amitprakash> the8thbit, you're talking about searching on a string/
[11:00:26] <amitprakash> s/\/?
[11:00:30] <the8thbit> yeah
[11:00:39] <amitprakash> right, so thats okay
[11:00:55] <amitprakash> a user searching for a substring is perfectly fine
[11:01:14] <the8thbit> oh ok
[11:01:16] <amitprakash> just ensure that the string is escaped
[11:01:36] <the8thbit> how do I do that?
[11:02:03] <Nodex> there is NO injection in mongo queries - only updates need sanitizing
[11:02:34] <the8thbit> oh cool
[11:06:18] <amitprakash> Nodex, https://www.idontplaydarts.com/2010/07/mongodb-is-vulnerable-to-sql-injection-in-php-at-least/
[11:12:16] <Nodex> updates I said
[11:14:12] <Nodex> + User is using Mongoose :)
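For context, the attack in the linked article is operator injection rather than string escaping: if a request parameter arrives as an object, { $ne: 1 } matches almost anything. Coercing inputs to strings closes that hole; a hedged Node-side fragment, not taken from anyone's app in this log:

```javascript
// If req.body.password arrives as an object like { "$ne": 1 }, passing it through
// unchecked would match any password. String() turns it into "[object Object]".
var user = String(req.body.user);
var password = String(req.body.password);
db.collection('users').findOne({ user: user, password: password }, function (err, doc) {
  // doc is null unless both fields match literally
});
```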
[11:16:39] <morfin> hello
[11:17:13] <morfin> can anybody tell me, are there functional indexes?
[11:20:27] <Nodex> what is a functional index?
[11:24:56] <morfin> something like hmm
[11:25:05] <morfin> MyFunc(field)
[11:25:47] <morfin> which is calculated when i add a new document, to simplify search when i have lots of documents and need to search by that criterion
[11:26:09] <Nodex> then no
[11:26:52] <Nodex> there are no functions in MongoDB - not in the sense of things like lower(foo) == lower('Foo')
[11:38:16] <morfin> so how fast will search with complicated conditions be?
[11:39:14] <Nodex> how long's a piece of string... depends on indexes, data, lots of things
[11:44:07] <morfin> but what if i need to combine some strings somehow
[11:47:51] <hjb> howdy. according to https://jira.mongodb.org/browse/SERVER-1625 mongo core server doesn't work on big-endian
[11:47:58] <hjb> but what's about the client?
[11:48:15] <hjb> i can't find a solaris sparc pre-built client anywhere :-/
[11:50:43] <Nodex> morfin : that's down to your app
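The usual substitute for a functional index is to do the work in the application, as Nodex suggests: compute the derived or combined value on every write, store it in its own field, and index that field. A hedged shell sketch with invented names:

```javascript
// Store a precomputed lowercase copy alongside the original and index it.
db.people.insert({ name: "Foo", name_lower: "foo" });  // the app fills name_lower on every write
db.people.ensureIndex({ name_lower: 1 });

// The equivalent of lower(name) == lower('Foo') then becomes an indexed equality match:
db.people.find({ name_lower: "foo" });
```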
[12:22:04] <hjb> noone?
[12:22:58] <Derick> hjb: depends on the language driver really
[12:23:15] <Derick> the PHP one should work
[12:25:54] <hjb> Derick: well, ok. php is not an option for us. i guess the java client should also work?
[12:26:14] <Derick> it would surprise me if it didn't, but I can't give you an authoritative answer
[12:33:19] <hjb> Derick: ok, np. i'll just check it
[12:37:18] <traplin> Meteor.users.find({"services.facebook.name":friendName}}).fetch();, with this statement how would i exclude any documents that the field "_id" equals "personID"
[12:37:45] <Derick> that has a typo
[12:37:52] <Derick> one } too many after friendName
[12:38:27] <traplin> oh, missed that, thanks
[12:38:29] <Derick> {"services.facebook.name":friendName, '_id' : { '$ne' : personID }} ought to do it
[12:39:50] <traplin> ah thanks Derick ! that works perfectly
[12:40:02] <traplin> was getting confused whether to use $ne / $not and where to put them
[12:41:23] <hjb> Derick: but for the "standard" command line client mongo there's little chance, right?
[13:07:18] <katspaugh> Hi! What is the recommended way to handle connections to MongoDB, when using mongodb-native driver?
[13:08:00] <katspaugh> Should I open a single connection and reuse it throughout my app, or open new connections to the DB each time I need to query or insert?
[13:08:57] <katspaugh> Right now, I’m reusing a single connection, never closing it. I experience hangs on bulk inserts and when querying collections.
[13:13:18] <Nodex> I open one at the start and don't have a problem with throughput
[13:18:46] <katspaugh> Nodex: thanks! It must be something in my app, then.
[13:21:18] <LoneSoldier728> am i calling populate wrong? http://pastebin.com/S2QmZwPv it does not grab the results even though they are there
[13:27:07] <LoneSoldier728> http://stackoverflow.com/questions/22610776/populate-issue-with-mongodb
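The pastebin itself isn't reproduced in this log, so for reference here is a minimal populate setup; the schemas and field names below are illustrative only, not LoneSoldier728's actual models:

```javascript
var mongoose = require('mongoose');

// Hypothetical schemas: a User whose `stories` field stores ObjectId refs to Story docs.
var Story = mongoose.model('Story', new mongoose.Schema({ title: String }));
var User  = mongoose.model('User', new mongoose.Schema({
  name:    String,
  stories: [{ type: mongoose.Schema.Types.ObjectId, ref: 'Story' }]
}));

// populate() swaps the stored ObjectIds for the referenced Story documents.
User.findOne({ name: 'someone' }).populate('stories').exec(function (err, user) {
  if (err) return console.error(err);
  console.log(user.stories); // an array of Story documents, not raw ids
});
```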
[14:28:12] <cfinley> Could anyone explain to me why the reduce method needs to output a value that can run through reduce again and output the same value?
[14:31:34] <kali> cfinley: it enables reducing in several stages. so you can have a first reduce step on each shard of your sharded database, then one on a single node to finish the work. it also avoids reduce steps that would require an enormous values array, sometimes exceeding memory
[14:49:07] <cfinley> kali: thanks!
[15:00:45] <cfinley> If the map/reduce is running on one machine should reduce run multiple times?
[15:05:07] <G1eb> Hello, is it possible to have an array with max length, meaning that when I push something new to it, it automatically deletes the oldest item and adds the new one
[15:05:24] <kali> i think the behaviour is to reduce by chunk of 1000 docs, so yes, it can. you must not make assumptions here anyway
[15:05:44] <G1eb> oh wait I think I found something like a slice param on update
[15:05:46] <kali> cfinley: also, be aware the reduce may not be called if the values[] has just one element
[15:06:02] <kali> G1eb: yep, $push with $sort and $slice
[15:06:24] <cfinley> kali: I did notice that. Does that mean any statistics should be done in the finalize?
[15:06:25] <G1eb> awesome, I knew mongo had something automatic for this
[15:06:46] <kali> G1eb: it's quite recent, to be honest :)
[15:06:52] <cfinley> kali: I'm writing a method that takes user events and generates user sessions for analytics.
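A small sketch of the re-reduce property kali describes: reduce can be fed its own earlier output, so it must return the same shape it consumes and the operation must be associative. Collection and field names here are invented, not cfinley's:

```javascript
// Hypothetical mongo shell map/reduce counting events per user.
var mapFn = function () { emit(this.userId, { count: 1 }); };

// `values` may already contain reduced results, so the return value keeps the
// exact { count: N } shape the map step emits; summing is safe to re-run.
var reduceFn = function (key, values) {
  var total = 0;
  values.forEach(function (v) { total += v.count; });
  return { count: total };
};

db.events.mapReduce(mapFn, reduceFn, { out: "events_per_user" });
```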
[15:08:01] <G1eb> kali, great functionality imo, so practical sometimes
[15:11:55] <Nodex> https://gist.github.com/dhh/9741477# LOL
[15:12:53] <cheeser> stupid people--
[15:14:05] <FrancescoV> hi all, I'm new to mongodb, do i need to create the /data/db folders?
[15:14:11] <Nodex> terrible, ddos as a money making tool
[15:14:18] <Nodex> FrancescoV : depends how you installed it
[15:14:41] <FrancescoV> Nodex: I have /username/Development/mongodb
[15:15:30] <Derick> FrancescoV: you can pass the path you want to the command line when you start mongodb
[15:15:44] <Derick> or put it in the .ini file
[15:16:00] <Derick> (in mongodb.conf I mean)
[15:16:05] <Derick> dbpath=/home/derick/mongodb-demo
[15:16:08] <Derick> is what I have in there
[15:17:32] <kali> ha, they've found the plane. so much for the black hole theory
[15:18:07] <G1eb> popular theory here was that it was going to land in the hague tomorrow ;)
[15:18:26] <kali> why the hague ? :)
[15:18:31] <Nodex> where they find it?
[15:18:36] <Nodex> + did
[15:19:08] <Nodex> http://www.bbc.co.uk/news/world-asia-26716572
[15:19:20] <G1eb> because of that nss 2014 meeting
[15:19:28] <kali> Nodex: it's actually not that clear. they have officially decided that the debris in the south indian ocean has to be it, but i don't think there is actual physical proof yet :)
[15:20:01] <Nodex> :/
[15:20:26] <kali> i *do* like the "west of perth" phrase
[15:20:37] <kali> that's 6k km west of perth :)
[15:21:00] <kali> but it sounds like you could nearly see it from perth
[15:26:33] <hipsterslapfight> it's flat enough you can see for miles kali! :v
[15:26:38] <hipsterslapfight> (i don't miss living in perth)
[16:13:18] <G1eb> kali, I'm trying this $push $sort and $slice approach but keep getting: RangeError: Maximum call stack size exceeded =/
[16:13:25] <G1eb> any ideas what can possibly trigger it?
[16:22:37] <viktor_> How do I make a query on several _id's and get the result as an array with documents <in the same order> as my list of _id's? I've tried with $or and $in ...
[16:24:12] <rkgarcia> viktor_: use sort
[16:25:34] <viktor_> rkgarcia: How do I use sort in this case? I only know how to sort by some key.
[16:33:03] <riceo> hi everyone. quick sharing implementation question. are there any best practices around config server placement? I was thinking of running the config servers on my shard servers
[16:33:12] <riceo> *sharding
[16:43:10] <betty> 04C@tB1rd
[16:43:19] <rkgarcia> betty: wrong place :P
[16:43:29] <betty> oops
[16:43:37] <cheeser> i have the same combination on my luggage!
[16:44:26] <rkgarcia> viktor_: you need some code then :)
[16:45:28] <viktor_> rkgarcia: hehe yes, it feels like this is something that many people will want to do, so I'm wondering if there's a recommended way of doing it. I don't see an obvious and nice way of doing it.
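There is no server-side "sort by the order of my id list", so the usual pattern is an $in query followed by a client-side reorder; a hedged mongo shell sketch against a made-up collection:

```javascript
var ids = [ /* your _ids, in the order you want the results back */ ];
var docs = db.things.find({ _id: { $in: ids } }).toArray();

// $in returns documents in whatever order the server finds them, so re-order here.
var byId = {};
docs.forEach(function (d) { byId[d._id.valueOf()] = d; });
var ordered = ids.map(function (id) { return byId[id.valueOf()]; });
```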
[16:57:34] <the8thbit> How would I query for info in a member of an array? Also, is there any easy way to query for data in _every_ member of an array?
[16:59:01] <rkgarcia> the8thbit: sounds like elemMatch operator
[16:59:23] <the8thbit> rkgarcia: thanks!
[16:59:34] <rkgarcia> the8thbit: you are welcome
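A quick sketch of what rkgarcia is pointing at, with invented field names: $elemMatch matches documents where at least one array element satisfies all the given conditions, and the "every element" case is usually expressed by negating the violation:

```javascript
// Hypothetical documents: { scores: [ { value: 80, kind: "exam" }, ... ] }

// At least one element with value > 70 AND kind "exam":
db.students.find({ scores: { $elemMatch: { value: { $gt: 70 }, kind: "exam" } } });

// Every element has value > 70, i.e. no element violates it.
// (Note: this also matches documents with an empty or missing scores array.)
db.students.find({ scores: { $not: { $elemMatch: { value: { $lte: 70 } } } } });
```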
[17:00:42] <kali> G1eb: how many items are there in your array ?
[17:01:24] <G1eb> well, currently 0..
[17:01:35] <G1eb> could that be it? >.<
[17:01:49] <kali> G1eb: no idea honestly
[17:01:58] <G1eb> let me try something like upsert and safe
[17:02:19] <kali> riceo: it's an ok place.
[17:35:03] <ShortWave> Hrm
[17:35:44] <rkgarcia> hello ShortWave
[17:35:57] <ShortWave> Is there a way to append to an array in a subdocument? I can't seem to get that to work right.
[17:37:55] <kali> ShortWave: show us what you are trying
[17:38:44] <ShortWave> Ah, I'm in the middle of a discussion about how to handle this part of the data model, one second. The exact use case here is in some doubt, apparently.
[17:38:58] <rkgarcia> ShortWave: $set with "subdocument.value" ?
[17:39:32] <ShortWave> Wasn't working, but now I think it was a criteria problem.
[17:39:49] <ShortWave> I tested it bare on a new collection.
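For the record, appending to an array nested inside a subdocument is normally just $push with dot notation; a hedged sketch with invented names, since ShortWave's actual model isn't shown:

```javascript
// Hypothetical document shape: { _id: ..., profile: { tags: ["a", "b"] } }
db.users.update(
  { _id: someId },                     // placeholder criteria (this is where it usually goes wrong)
  { $push: { "profile.tags": "c" } }   // dot notation reaches the array inside the subdocument
);
```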
[18:12:28] <faef> if I want a hidden secondary to do a fresh initial sync, I delete all the data in dbpath: does this include the local db?
[18:15:10] <decompiled> faef, you can. how large of a dataset do you have?
[18:15:30] <decompiled> resync'ing for anything sort of large sucks
[18:16:04] <faef> 1TB, it's annoying but it's the only way to get a compacted database without downtime
[18:16:43] <faef> we just deleted a bunch of indices on the primary, so we want to reclaim that disk space. choice is either repairdatabase directly on the secondary or just delete the data and resync
[18:18:15] <kali> faef: yes, delete the local database too
[18:18:28] <kali> faef: literally remove everything from the dbpath directory
[18:19:07] <decompiled> have you done this before?
[18:19:28] <kali> decompiled: me ? yeah. dozens of times
[18:19:31] <decompiled> coo
[18:19:37] <decompiled> or faef
[18:19:46] <decompiled> because 1TB might never fully sync
[18:19:56] <kali> it depends on the oplog depth
[18:20:16] <decompiled> there are other ways if you have 2 secondaries
[18:20:20] <decompiled> that are quicker
[18:20:49] <kali> quicker, but certainly not easier
[18:20:57] <decompiled> I mean, it is easier as well
[18:21:01] <decompiled> less stressful
[18:21:35] <faef> yep, tons of times, we have a very large oplog
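Whether a 1TB initial sync finishes before the member falls off the oplog, which is decompiled's worry, comes down to the oplog window; it can be checked from the mongo shell:

```javascript
// Run on the primary in the mongo shell.
db.printReplicationInfo();
// Prints the configured oplog size and the "log length start to end", i.e. how many
// hours or days of writes the oplog currently covers; the resync must finish within that window.
```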
[18:23:05] <riceo> @kali: thanks
[18:45:38] <NaN> how do I change the "_id, value" schema from a mapreduce output?
[18:46:16] <rkgarcia> NaN: in the map function >_<
[18:47:57] <kali> nope. not in map, not anywhere else
[18:48:01] <NaN> '_id', 'value' literally, not key, value
[18:48:26] <NaN> so do I need to aggregate and project over the new data?
[18:49:33] <kali> NaN: maybe... i'm not sure what you're trying to achieve
[18:51:45] <NaN> I'm creating a new collection from a nested document; $project worked, but because I need to reduce the docs I used mapreduce
[18:52:04] <NaN> it worked, everything is ok, but I don't get how to change that _id, value schema in the output
[18:53:53] <NaN> I thought about finalize, but it's the same: "value" gets the finalize results and "_id" the initial mapped data
[18:54:00] <kali> yeah
[18:54:07] <NaN> so I will go with $project again
[18:55:03] <NaN> will format the docs with finalize and extract the "values" with $project
[18:55:17] <rkgarcia> NaN: http://stackoverflow.com/questions/8416262/how-to-change-the-structure-of-mongodbs-map-reduce-results
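The mapReduce output collection always has the fixed { _id, value } shape; what the linked answer boils down to is reshaping it afterwards, for example with an aggregation $project. The collection and field names below are invented:

```javascript
// Hypothetical: mapReduce wrote { _id: <userId>, value: { total: 42 } } docs to "mr_out".
db.mr_out.aggregate([
  { $project: { _id: 0, userId: "$_id", total: "$value.total" } },
  { $out: "user_totals" }  // $out needs MongoDB 2.6+; on 2.4, drop this stage and handle results in the client
]);
```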
[18:57:27] <Honeyman> Hello. If I have 4 equal hosts, what is the best idea for a replica set configuration to utilize them fully?
[18:57:34] <Honeyman> The docs say if I have an even number (4) of replica set hosts, I should run the arbiter, but it shouldn't be executed on the same host as any of the rs hosts, so that calls for the fifth host...
[18:57:40] <Honeyman> Am I right that the best I can do is "1 primary, 2 regular secondaries, 1 non-voting secondary"?
[18:58:00] <cheeser> probably
[18:58:40] <Honeyman> Is that at least a correct and reasonable configuration?
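One way to express the layout Honeyman describes (three voting, data-bearing members plus one non-voting secondary that can never become primary) as a mongo shell sketch; the hostnames are placeholders:

```javascript
// Keeping three voters preserves an odd voting count without an arbiter;
// the fourth member still replicates data but cannot vote or be elected primary.
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "host1:27017" },
    { _id: 1, host: "host2:27017" },
    { _id: 2, host: "host3:27017" },
    { _id: 3, host: "host4:27017", votes: 0, priority: 0 }
  ]
});
```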
[18:58:51] <NaN> it doesn't say how to do it, but it's a nice read, thanks rkgarcia
[18:59:22] <LoneSoldier728> anyone know how to use populate correctly
[18:59:45] <LoneSoldier728> for mongoose
[19:03:02] <G1eb> in the docs on http://docs.mongodb.org/manual/reference/operator/update/push/#up._S_push they use such a weird notation for push with slice at the bottom
[19:03:26] <G1eb> i can't seem to work around that $each as I only need to push 1 item
[19:03:42] <G1eb> bleh
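The $each wrapper is required whenever $slice (or $sort) is used, even when pushing a single element; a hedged sketch of the capped-array update G1eb is after, with invented collection and field names:

```javascript
// Keep only the 10 most recent entries in the `recent` array.
db.stream.update(
  { _id: someId },  // placeholder criteria
  { $push: { recent: {
      $each:  [ { ts: new Date(), msg: "hi" } ],  // a single new item still goes inside $each
      $sort:  { ts: 1 },                          // oldest first...
      $slice: -10                                 // ...then keep only the last 10
  } } }
);
```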
[19:18:46] <ShortWave> Right so
[19:18:58] <ShortWave> I've got a table with arrays of 2D locations.
[19:19:10] <ShortWave> ensureIndex( 'location' : '2d' ) worked fine and didn't throw errors...
[19:56:36] <ShortWave> So just so I understand
[19:56:56] <ShortWave> when I use geoNear, that's running a query using the 2d geospatial index that I've created on my collection, yes?
[19:58:06] <segphault> has anyone here run tokumx in production?
[20:01:25] <benth> does mongodb do anything special for sparse arrays? (i.e. arrays where most values are null or undefined)
[20:11:53] <ShortWave> So, better to use a dbCommand (like geoNear) vs. a straightup geospatial query?
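For reference, the ensureIndex call mentioned earlier takes a document argument; a hedged shell sketch of the 2d index plus both ways of using it, on a made-up collection:

```javascript
// Index creation (the argument is a document, not a bare key:value pair):
db.places.ensureIndex({ location: "2d" });

// Query form: $near uses the 2d index directly.
db.places.find({ location: { $near: [ -73.97, 40.77 ], $maxDistance: 0.1 } });

// Command form: geoNear uses the same index and also returns a distance per result.
db.runCommand({ geoNear: "places", near: [ -73.97, 40.77 ], num: 10 });
```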
[20:48:30] <skullz> Hey guys, how should I define an index on an array of emails?
[20:48:36] <skullz> (using mongoose)
[20:48:48] <kudos> is it a bad sign that I need to use a hint to make mongodb not sort super slowly?
[22:43:06] <alexi5> hello
[22:45:23] <rkgarcia> hello alexi5
[22:46:10] <alexi5> i am new to mongodb and nosql databases
[22:46:35] <alexi5> and am currently going through the tutorial
[22:46:46] <BadHorsie> [,gl 3
[22:47:18] <rkgarcia> alexi5: then?
[22:47:21] <alexi5> based on your experience what advantages does this type of database offer you over a relational database ?
[22:55:46] <scottyob> G'Day everyone. Do you guys help with minimongo too?
[23:02:30] <LoneSoldier728> hey so I am getting back this from populate... http://pastebin.com/sigfqXKu and trying to call results.stories says it is undefined... even results.stories[0] says can't get 0 of undefined... and also, there should be two objects in there, not one... what am i doing wrong?
[23:03:31] <ShortWave> So just out of curiosity... why is it "ensureIndex"? is that just some functional naming with more meaning than normal?
[23:03:32] <LoneSoldier728> anyone have a clue? I've about had it with mongodb today
[23:06:07] <ShortWave> I'll give it a look, but I'm no expert
[23:06:33] <ShortWave> results[0].stories[0] I think
[23:06:43] <ShortWave> Results is presented as an array
[23:06:58] <ShortWave> Unless I'm reading that returned structure wrong.
[23:07:11] <ShortWave> but it looks like it ends in ] to me
[23:07:41] <ShortWave> let me know if that solves it...or not. I'll look deeper at it if you still have a problem